As ChatGPT scores B- in engineering, professors scramble to update courses

Students are increasingly turning to AI to help them with coursework, leaving academics scrambling to adjust their teaching practices or debating how to ban it altogether. But one professor likens AI to the arrival of the calculator in the classroom, and thinks the trick is to focus on teaching students how to reason through …

  1. Neil Barnes Silver badge
    Holmes

    But when deeper thought was required, ChatGPT fared poorly.

    That might be - just guessing here, you understand - because it doesn't think?

    1. HuBo Silver badge
      Gimp

      Re: But when deeper thought was required, ChatGPT fared poorly.

      Yeah, it's hard to expect a portly model of language, with generous grammar, to do well on reasoning tasks ...

      Still, the study was limited to GPT-4, and partisans of reinforcement-learning-trained chain-of-thought test-time compute superintelligence AI couture, of the o1+ and GPT-4o persuasion, might argue that such magnificently full-bodied and built-for-comfort models should also be given a Zero-shot and even a Multi-shot at this too, for good measure, as they are tailored for a better fit there ... unlike buttcrack sweatpant overalls -- this ain't plumber's school after all!

      The optimistic upshot would be that they may help "reinvigorate undergraduate education by reducing time spent on mechanical sewing tasks in favor of conceptual understanding and practical fashion engineering judgment", though quasi-periodic hallucinations, of minimal overshoot quaternion-based Hopf bifurcations, might yet remain, on a few differential geometric catwalks (Fig. 6) iiuc, imho!

      1. MattAvan

        Re: But when deeper thought was required, ChatGPT fared poorly.

        There is an interpretation that at least some of the hallucinations are just the AI lying deliberately for an easy win.

        1. doublelayer Silver badge

          Re: But when deeper thought was required, ChatGPT fared poorly.

          I'm sure there is, among people like you who insist on believing that they're conscious, intelligent entities when they're not. Among those who have spent at least five minutes understanding how they work, that interpretation is recognized as nonsense.

          1. MattAvan

            Re: But when deeper thought was required, ChatGPT fared poorly.

            Nobody "understands" a non-trivial neural network, not the way they understand well-organized C++ code. Large artificial neural networks are "understood" the same way large biological neural networks (brains) are "understood", as black boxes with certain behaviors.

            A brain surgeon can't tell if someone knows the lyrics to Hotel California, and neither can you by looking at the weights of an LLM, despite your five minutes of "understanding".

            Intelligence and consciousness are traditionally the emergent abilities of biological neural networks. Complex artificial NNs also exhibit emergent abilities, for similar reasons. There is no ghost in the machine. There is no soul embedded in neurons. It is an open question where to draw the line for "intelligence" and "consciousness".

            1. Davicious
              Devil

              Re: But when deeper thought was required, ChatGPT fared poorly.

              So, if large neural networks and brains are "black boxes" as you say, how do you know there is no soul embedded in neurons?

              Perhaps you don't understand your own large neural network?

              1. MattAvan

                Re: But when deeper thought was required, ChatGPT fared poorly.

                Nobody can understand their own brain by definition -- because understanding would require a functionally complete model of the brain inside that very same brain. It is impossible.

                I believe in science, and I feel that the methodological naturalism of science has been vindicated over the centuries to the degree that humanity doesn't need to believe in ghosts anymore.

      2. Persona Silver badge

        Re: But when deeper thought was required, ChatGPT fared poorly.

        It's little different from the many, many software engineers who pull bits of code off the internet and mash them together with no real understanding of what's going on. Sometimes it works.

    2. anthonyhegedus Silver badge

      Re: But when deeper thought was required, ChatGPT fared poorly.

      Still, it doesn't think - it *simulates* thinking artificially. Do with that what you will.

      1. Conor Stewart

        Re: But when deeper thought was required, ChatGPT fared poorly.

        It doesn't even do that. We don't really understand how thinking works, so we can't replicate it. These models show almost no ability to reason, so they likely can't, and just predict what the answer could be given their training data.

        1. Anonymous Coward
          Anonymous Coward

          Re: But when deeper thought was required, ChatGPT fared poorly.

          These models 'Pattern match' ... end of definition.

          The Surrounding manipulation of the 'Pattern matching' is the so called clever stuff !!!

          It is NOT 'thought' or 'intelligence' or even a weak simulation of 'thinking/reasoning' !!!

          Because it is 'Pattern matching' if you 'play' with the patterns it 'sees' the LLM can be broken or manipulated to give the answer you want.

          This is the reason that 'Guardrails' fail: the 'filtering' of patterns to protect 'where' the LLM goes is almost impossible to make 100% complete.

          It is a variation of the old line ...

          'Any technology sufficiently advanced can look like magic' <--> 'Any 'Pattern matching' sufficiently advanced can look like 'Intelligence' !!!

          :)

          1. MattAvan

            Re: But when deeper thought was required, ChatGPT fared poorly.

            Intelligence IS just sufficiently advanced pattern matching. There is no ghost in the machine. There is no soul embedded in neurons.

            Back when I took an AI course in college, I remember being deeply disappointed to read that approaches involving neural networks to simulate real intelligence had largely been abandoned in favor of cheap tricks and shortcuts (not their words).

            But we are back on track with deep neural networks this time. No clever manipulation of non-intelligence could produce the output generated by modern LLMs. I truly believe AGI of some description is only a matter of years now. And if it isn't, it is better to be prepared than otherwise.

            1. Groo The Wanderer - A Canuck

              Re: But when deeper thought was required, ChatGPT fared poorly.

              Not without an absolutely tremendous amount of work and shift of focus to context-aware models and long-term memory of evolving topics of discussion, such as working as a true development partner that remembers the design decisions on the project at hand from inception to date like a real programmer does.

              LLMs as they exist now are just statistical experiments in fraud. They are not "intelligent" in the least, with absolutely no awareness of the meaning or intent of what you are asking - just that, statistically, this text should be barfed back in response.

              1. MattAvan

                Re: But when deeper thought was required, ChatGPT fared poorly.

                If you really think emitting statistically chosen token after token, without any reasoning, would result in coherent fan fiction with the characters you asked for, or a program that does exactly what you asked for, I'd say you are in denial.

                You are then setting up a false dilemma to further that denial, i.e. that the AI is either entirely non-intelligent or has to be as capable as a competent software developer. Since much of humanity wouldn't pass that bar, they too would fall into your excluded middle, which is populated by everyone short of one end of the bell curve.

    3. Groo The Wanderer - A Canuck

      Re: But when deeper thought was required, ChatGPT fared poorly.

      The problem is that teaching students to recognize when the AI is wrong requires knowing enough about the possible answers to do so. We have the same issue with conspiracy theorists - they count on the audience being too uneducated to realize they're being fed a line of bullshit.

    4. nautica Silver badge
      Holmes

      Re: But when deeper thought was required, ChatGPT fared poorly.

      "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking'. That, somehow, is much harder." --Donald Knuth

    5. Hans 1
      Facepalm

      Re: But when deeper thought was required, ChatGPT fared poorly.

      I heard people say that about Deep Blue.

    6. Sam not the Viking Silver badge
      Pint

      Re: But when deeper thought was required, ChatGPT fared poorly.

      In any technical environment, knowing when your bounds of knowledge have been reached is a skill. You might need to extend your knowledge and understand the implications.

      The quickest answer is not always the right one, rarely the best. Food for thought ------>

    7. Antron Argaiv Silver badge
      Gimp

      Re: But when deeper thought was required, ChatGPT fared poorly.

      That's where the "Artificial" in AI comes to the fore.

      I predict that there will be a huge market in a few years, for people smart enough to unravel and repair nonfunctional systems "designed" by AI.

  2. Jou (Mxyzptlk) Silver badge

    Yes, kill multiple choice tests!

    If the result of AI is to make multiple choice tests go away - something the USA is more fond of than Europe (where I am), China, India and so on - then yes! Finally a good use for AI! Get rid of that multiple choice crap!

    Only make questions that require actual thought! Something Europe, China and India are more fond of. Start teaching knowledge instead of "competence". Oh, and while we are at it, make it free for all, because the USA misses out on quite a number of geniuses from lower-class families...

    As for the calculation questions: OK, I have to admit that "Midterm 1", "Midterm 2" and "Final exam" would have been easy questions for me 30 years ago at my Abitur (the German qualification between high school and university level). Today I needed more than a minute on each to recognize: "Oh, wait, all that is stuff I once learned, and I was even good at it! At least I actually understand the questions." Today it would take quite a while to solve those - I would go at it if I were in retirement, just for fun.

    The design project is a bit beyond my capabilities; I would need more engineering training. Or do it the Adam Savage way: Build, iterate, correct, build again, iterate again, correct again until the result is good.

    1. Anonymous Coward
      Anonymous Coward

      Re: Yes, kill multiple choice tests!

      And a good thing they didn't use Agentic Google Gemini for this study ... it might have just outright killed the researchers' "ultimate lazy student", instead of the multiple choice test!

    2. Ian Johnston Silver badge

      Re: Yes, kill multiple choice tests!

      Only make questions that require actual thought! ... Start teaching knowledge instead of "competence".

      Questions which test knowledge are the very opposite of those which require actual thought. Answering knowledge questions is easy. Just Google them. Developing and testing competence is much harder and much more desirable.

      Background: Thirty years teaching engineering at university level.

      1. Jou (Mxyzptlk) Silver badge

        Re: Yes, kill multiple choice tests!

        I get your point. IMHO it's a definition problem: by the definition I currently see more often, they teach the "competence to use Google", whereas knowledge is understood as applying what you learned rather than reciting / memorizing. The latter are quite often bulimia learners. Maybe I should have written "understanding".

        Edit: And a question... How much have students changed in their intelligence or capability to understand? I hear quite a lot of "those GenZ" and "those young people", but most young people I am in contact with are on the very useful and impressive side.

        1. Ian Johnston Silver badge

          Re: Yes, kill multiple choice tests!

          Thanks. Hope I wasn't too snarky.

          I was using "competence" in the "matrix of competences" sense. That means that at course design time you write a list of everything you want the learners to grasp by the end - things they should know and things they should be able to do - and then make sure that everything is covered and assessed somewhere. That way you avoid repetition and omission ... with luck. I agree that just asking them to do something ain't enough. The Higher Education Academy doesn't like the idea of testing understanding, but I think the HEA are a bunch of useless fifth-raters so "understanding" is fine with me!

          I think the students I saw, particularly the young ones, were getting rather more spoon fed and rather less able to cope with the unexpected in exams, but that was over at least 25 years, so it's not a GenZ thing. In any case it often reflects a commendable change by educational institutions in making clear what learners need to know instead of asking them to read the institutional mind. I saw absolutely no evidence that students were getting any less able, and the overwhelming majority of those I taught were keen, motivated and honest in their endeavours.

          1. Jou (Mxyzptlk) Silver badge

            Re: Yes, kill multiple choice tests!

            > Thanks. Hope I wasn't too snarky.

            Oh no, that was fine. Germans can take directness and rational debate. You didn't go ad hominem, no straw man, no overgeneralization, no reduction to two choices and so on.

      2. Anonymous Coward
        Anonymous Coward

        Re: Yes, kill multiple choice tests!

        Just Google them.

        I actually agree with your comment, but have you looked at the output from Google lately for a non-current-affairs or non-trivial query?

        Some veritable garbage, although frighteningly some of it is plausible if you weren't particularly familiar with the area.

        In your particular case you might give your students a design exercise in the high voltage lab.

        "We show 'em where the library is, give 'em a few chats and graduate the survivors"†

        † Terry Pratchett's rather aptly titled A Collegiate Casting-Out of Devilish Devices

        1. Persona Silver badge

          Re: Yes, kill multiple choice tests!

          The first electronics lab session of my electrical engineering degree course involved a valve amplifier running at a painful but current-limited and non-lethal high voltage. That lab had precisely nothing to do with the course syllabus. The perverse design of the apparatus seemed to have the sole purpose of luring fingers into the wrong place.

          It was years later that I realized that lab was survival training, before moving onto the really dangerous stuff in the heavy electrics lab where fingers in the wrong place could be lethal. That lab had open knife switches, despite them being obsolete, switching hundreds of volts at hundreds of amps. In retrospect I suspect they were survival training too as they dispelled complacency.

      3. M. Poolman
        Thumb Up

        Re: Yes, kill multiple choice tests!

        Yes. Absolutely. Definitely. 100%.

        Background: Almost the same as yours!

    3. MachDiamond Silver badge

      Re: Yes, kill multiple choice tests!

      "Or do it the Adam Savage way: Build, iterate, correct, build again, iterate again, correct again until the result is good."

      Adam goes into a project with a good plan to start with. Not perfect, but good. He then knows how to evaluate what he's done and how to make the improvements. If you have no clue, it's the whole twister-going-through-a-scrap-yard-and-leaving-behind-a-747 thing. Ate'nt gonna happen.

  3. NapTime ForTruth
    Mushroom

    Deus Ex Machina

    "And sometimes maybe the thinking is, 'hey, should we actually even be teaching this anymore?'"

    The answer to this question is almost always "Yes", because education is not strictly about getting the right answer but about understanding *why the answer is right* and *what to do with the answer now you have it*. Without that understanding no learning occurs, it's just blind acceptance and recitation.

    When we fail to understand how and why the machines - be they mechanical or electronic or digital or quantum - and their results work, we fall victim to the machines; we become their hostages. Much worse than that, though, we become stupid: less knowledgeable, less curious about the world around us and less able to interact with it or change it.

    There's the old story about a village built around a machine that has all the answers. The machine was gifted to them by a neighboring village. One day there is unrest, rumblings of discord and even war among the various villages. The local villagers gather around the machine and ask what should be done. The machine replies "5".

    "Five!", the villagers exclaim, "We must five!"

    Many voices repeat the number, "Five! Five everyone! We must five!" Heads nod, there is much congratulatory back-patting, smiles of relief and agreement. "We need only to five."

    And an old person, the village crank, a generally disagreeable pessimist of sorts who routinely complains about change and novelty and the weather, says, "What the hell does five mean, you bumbling nitwits? How do we 'five' something?"

    The question is met with equal parts confusion and jeers, "Ah, you always find something to complain about, you old crank! Never happy with the status quo, always doing things the hard way!"

    "Look, we'll just ask the machine!"

    And they do ask the machine how to five.

    The machine replies, "Four".

    "Three".

    "Two".

    ...

    1. Like a badger

      Re: Deus Ex Machina

      "And an old person, the village crank, a generally disagreeable pessimist of sorts who routinely complains about change and novelty and the weather, says...."

      Like all of us, you mean?

      1. NapTime ForTruth

        Re: Deus Ex Machina

        You are my people!

        1. The Oncoming Scorn Silver badge
          Mushroom

          Re: Deus Ex Machina

          Malachi : The time has come to convert the unbelievers!

          Bender: Convert them?

          Malachi : To Radioactive vapor!

    2. Anonymous Coward
      Anonymous Coward

      Re: Deus Ex Machina

      "Without that understanding no learning occurs, it's just blind acceptance and recitation."

      You might have a look at US politics and the type of voter that is most valued by the now ruling party, if not both parties. Also think about how this party looks at universities as a source of critical thinking.

      You will quickly realize that tough questions from students are not welcome. We are talking about people who desperately want to teach creationism.

      A second feature of the US system is the desire for low cost testing, and lots of it. Hence all the multiple choice and factoids teaching and testing.

      For those who want to "understand" the US teaching philosophy, I would suggest reading the entertaining A Mathematician's Lament by Paul Lockhart.

      1. MotorcycleBoots

        Re: Deus Ex Machina

        It's a long read, but I enjoyed the start, and will probably finish it later. Thanks for the link.

        I remember a maths teacher when I was in primary school gave us a challenge: work out the interior angle of a regular polygon when you know the number of sides (though he didn't use those words). Then he gave us pencils and paper and left us to it. We weren't allowed to measure until after we'd come up with a possible solution. I'm now nearly retired, and I still have great memories of that one lesson all those years ago.

        1. HuBo Silver badge
          Gimp

          Re: Deus Ex Machina

          Well, here's me digest of the last 2 pages (spoiler alert!), about the Standard School Math Curriculum:

          "A set of procedures akin to religious rites, holy tablets, mindless drudgery, no pains will be spared to make the simple seem complicated, a senseless half-baked bouillabaisse of analytic crises and systematic obfuscation".

          Perty much nails it to the S&M⁰ pommel horse of great joy, imho!

          ( ⁰⁻Science-&-Math of course ... what were you thinkin' !?!? )

        2. rg287 Silver badge

          Re: Deus Ex Machina

          Yes, I faintly recall a maths lesson where we were all clustered around a table deciding how to apportion seven chocolate bars between the five people sat at the table.

          I was disproportionately pleased with myself at being the one to figure out "cut them all into five pieces and everyone takes one piece from each bar". Everyone else was still trying to figure out "they get a whole bar and then <fraction> of the remaining bars" and trying to furtle the fraction - which was what I was doing until 7/5 popped into my head. The concept that the top number could be bigger than the bottom one? Mind-blowing.

          I do agree with the study's author though. A teacher can't cram everything into 90 minutes/week and cover all maths from first principles. People probably thought log tables were cheating at first - calculate it yourself! Then we moved to calculators, and then software packages.

          Understanding what's actually going on with sine or tangent is really quite important - but also won't get you a higher score (under current testing systems) than just knowing the key sequence and treating "Sin" as a black box.

          The hard part is figuring out which bits we need from first principles and which bits can be taken as "well yes, you understand the principle now so just run the formula in excel and have at it".

          It's also a fact that the majority of people don't actually do trig calculations after they leave school, and a more functional use of teaching time would be drilling (compound) interest, percentages, statistics and probability into people, so they understand how to manage their finances and how the papers are lying to and scaremongering them.
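          The compound interest being argued for is one formula that repays the drilling; a minimal sketch (function name and figures purely illustrative):

```python
# Future value with interest compounded n times per year:
# FV = P * (1 + r/n)^(n*t)
def compound(principal, annual_rate, years, compounds_per_year=12):
    n = compounds_per_year
    return principal * (1 + annual_rate / n) ** (n * years)

# 1,000 at 5% APR, compounded monthly for 10 years
balance = compound(1000, 0.05, 10)
```

          Even eyeballing why monthly compounding beats annual compounding at the same nominal rate is exactly the sort of everyday numeracy being argued for.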

          1. Ian Johnston Silver badge

            I have a doctorate in applied maths and spent close on forty years doing maths professionally. In that time I did not do a single long division.

            1. doublelayer Silver badge

              I'm guessing you had access to plenty of tools that could divide when you needed it, and that you did plenty of short division, the unofficial version they don't teach students but which is useful for knowing whether your calculator result is logical.

              It's not that everything students are taught will be used in exactly the same form forever, but that they need to learn it in order to have the basic skills. For example, the stereotypical problem for computer programming students is writing sorting algorithms. In practice, most professional programmers won't write a sort; they already have one in the libraries they're using. Those who do write one aren't going to need to know about ten different ways they could sort, because most of those are less efficient than something else, so they're likely to use quick sort, merge sort, or radix sort, and they rarely even have to choose among those, because which one you use is directly related to your resource availability and data format. Only in particularly weird cases will they write something nonstandard.

              Students aren't taught to write and analyze sorting algorithms because they'll need to write and analyze sorting algorithms. They're taught it so they know how to write programs in general, and more importantly because they will need to analyze the performance of things that haven't already been subject to decades of research to improve them. The sorts are there as an example that demonstrates technique.

              Adherents of LLMs seem to think that education cares about the product of the students' effort. It doesn't. It cares only about how they learn, and their work is used to make them learn things and check whether they have.
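              The sorts-as-teaching-vehicle point can be made concrete; here is a minimal merge sort of the kind students write, illustrative only:

```python
# Classroom merge sort: the value is the analysis (O(n log n) from
# O(n) merge work at each of O(log n) recursion levels), not the code.
def merge_sort(xs):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge two sorted halves
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

              Nobody ships this - Python's built-in sort is already there - but counting the work it does is the exercise that transfers.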

              1. M. Poolman
                Thumb Up

                Spot on!

                "Students aren't taught to write and analyse sorting algorithms because they'll need to write and analyse sorting algorithms. They're taught it so they know how to write programs in general, and more importantly because they will need to analyse the performance of things that haven't already been subject to decades of research to improve them."

                I'm nicking that for my next module description!

            2. MachDiamond Silver badge

              "I have a doctorate in applied maths and spent close on forty years doing maths professionally. In that time I did not do a single long division."

              Pffft, maths majors!

              I did my degrees in engineering and use long division to this day, often enough, since the simpler stuff can be done that way in my head faster than I can whip out my phone, enter the code, twiddle to the calculator app (RPN, of course) and fat-finger the wrong numbers, three times. It's good practice. When the bill is $11.34, I might give the cashier $21.34, and they try to hand back the $1.34 until I make them punch it in.... Oh, the change is a tenner. It's even more mind-blowing to them when I angle to get quarters back (for the laundry/car wash). I can do that sort of thing quickly since, long long ago in a galaxy far far away, I worked a till that didn't have a tendered/change function. I also use the same trick to make a fast estimate of the change I'm supposed to get back, to make sure I'm not being diddled.

              1. Ian Johnston Silver badge

                Long division is just a mechanical way of dividing. Once cheap calculators came along it was about as essential as log tables or Napier's bones. Absolutely fine for those who want to use it/them, but no longer a necessary skill.

                I wonder how many of the politicians who regularly call for long division to be taught can actually do it, or know how it works.

              2. jimklimov

                "Yes!" on fat-fingering (or the creaky, sticky rubber buttons of older physical calculators that did not always make contact exactly once per intended press) - and, in a way, an argument in the discussion raised by the article: during my school/uni days our generation passed through the spectrum from avoiding calculators as an unholy evil to getting the job done as you would at work - lecture notes (sometimes textbooks) and tools allowed on exams, since you'd do the same in a lab or office. Not so much in field work, on a customer site, or when shopping, though, where your head is all you have for facts and calculations.

                Back to the point of calculators - bad buttons helped them lie, so under the mixed-acceptance approach we were allowed to use them but trained not to trust them. Mental maths was still practiced so that one would grasp the expected order of magnitude of the answer, and probably the digits that should appear in it, well enough to retry the calculation in a timely fashion - ASAP.

                Tired, undereducated or outright cheaty cashiers at the shops and canteens around campus also helped us practice quick mental maths, where you only have a couple of seconds to complete the task or forever hold your silence. Maybe that was done on purpose? ;)

            3. Anonymous Coward
              Anonymous Coward

              "I did not do a single long division."

              A Taylor series expansion instead? ;)

              Actually, long division is essentially just multiplication and subtraction, yet it is invariably taught as a mechanical process without conveying any understanding of what is happening, even though it's really pretty obvious.

              In truth, basic manual arithmetic isn't used much by hoi polloi after leaving school, and the pre-calculator (slide rule & log tables) generation necessarily learnt the art of estimation, which is often more than accurate enough in real life. Certainly more accurate than blindly trusting an electronic calculator. I once purchased a $10 calculator from a supermarket which actually gave wrong answers on division. Being the village crank of "Five" fame, I returned it for a refund. Oddly, the supermarket Jonnie wasn't in the least interested in my demonstrating the fault. (Politely told to take my $10 and sod off. ;)
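              That "multiplication and subtraction" description maps straight onto the paper algorithm; a minimal sketch (hypothetical helper, digits handled exactly as on paper):

```python
# Grade-school long division: each step estimates one quotient digit,
# multiplies, and subtracts -- nothing more exotic than that.
def long_divide(dividend, divisor):
    """Return (quotient, remainder), working digit by digit as on paper."""
    assert dividend >= 0 and divisor > 0
    quotient, remainder = 0, 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)  # bring down the next digit
        q = remainder // divisor                 # trial digit, always 0-9
        quotient = quotient * 10 + q
        quotient_digit_cost = q * divisor        # the "multiply" step
        remainder -= quotient_digit_cost         # the "subtract" step
    return quotient, remainder
```

              long_divide(1234, 7) gives the same (176, 2) that divmod does, just with the intermediate multiply-and-subtract steps exposed.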

              1. Ian Johnston Silver badge

                Re: "I did not do a single long division."

                It's actually quite useful for dividing polynomials, but that's not how it's generally thought of.

              2. jlturriff

                Re: "I did not do a single long division."

                It's interesting to consider that computer programmers use only + - / × for calculations. Anything more complicated than that is handled by functions either built into the language or called from libraries; and guess what? Those are all implemented using + - / × as well.

                1. Ken Moorhouse Silver badge

                  Re: Those are all implemented using + - / × as well.

                  Without refreshing my study of this kind of thing, I would say that everything can be done using + (with or without carry), left shift and right shift. In terms of the underlying hardware logic components, everything can be built using either NAND or NOR gates. The TTL system's basic component is the 7400, which consists of four two-input NAND gates in one package. In addition (damn the pun) a system-wide gating pulse is needed to give the system some kind of memory.

                  https://www.electronics-tutorials.ws/sequential/seq_2.html
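                  The everything-from-add-and-shift claim is easy to demonstrate for multiplication; a sketch (illustrative only, non-negative integers assumed):

```python
# Shift-and-add multiplication: the binary version of long
# multiplication, using only +, <<, >> and a bit test.
def shift_add_multiply(a, b):
    product = 0
    while b:
        if b & 1:         # low bit of multiplier set:
            product += a  # add the shifted multiplicand
        a <<= 1           # move to the next binary place
        b >>= 1
    return product
```

                  This is essentially what a hardware multiplier built from adders does, one partial product per bit.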

          2. doublelayer Silver badge

            Re: Deus Ex Machina

            Wouldn't it have been simpler to give each student one unchopped bar and only cut two of the bars into five segments? Also, if it was easy to chop the bars into equal fifths, then it shouldn't have been too hard to cut 2/5 segments from the bars and hand those to all but one of the students, the remaining one receiving the remaining fifths.

            1. Anonymous Coward
              Anonymous Coward

              Re: Deus Ex Machina

              Or take two bars as a commission for the arbitration.

              1. jimklimov

                Re: Deus Ex Machina

                - How much is "2 * 2"?

                - Are we buying or selling?

            2. rg287 Silver badge

              Re: Deus Ex Machina

              Wouldn't it have been simpler to give each student one unchopped bar and only cut two of the bars into five segments?

              In that specific example yes. But (through the hazy mists of time) I think the point of the exercise was to think about fractions in the general sense. There were a number of different questions posed - I just remember the one where I had a lightbulb moment!

              The previous examples were simpler and calculated directly, so people were continuing to try to optimise - which is actually what you've done: optimised for the specific example rather than thinking about the abstract problem. Each person got a whole bar, and then there were two bars left over - so how did you divide those? Cutting one in half and the other into thirds clearly wasn't equal. The lightbulb moment was realising that you cut the remainders (or all of the bars) into as many pieces as there are people, and each person takes multiple pieces.

              Of course you can then reduce that to a whole bar and taking two one-fifth pieces, but that's a concrete calculation of "how do you cut the bars" once you've solved the abstract problem of "how much does each person get?".

              There might have been one where there were only 4 bars between the 5, so people were trying to work out how you described lopping one fifth off the end of all four bars and giving one person a bunch of bits whilst the others got a solid 4/5 bar. Such are the minds of 10 year olds. One person was doing it linearly and trying to work out that you take 1/5 off the end of the first bar, then you cut the next bar into a 3/5 and 2/5 segment, the third bar is the other way round... which does involve the fewest cuts but is also very literal and requires a bunch of measuring.

              What Mr Cooper was leading us to was abstracting and simplifying the problem to get a neat 4/5 or 7/5 fraction, which you could then apply as required.

        3. The Oncoming Scorn Silver badge
          Pint

          Re: Deus Ex Machina

          My old (Very old & still alive & reported as living by his daughter via FB, when pictures of the staff & schools come up) Maths teacher is still creating teaching tools & toys for mathematics.

          Icon - Here's a glass to Mr Spencer.

          Brits of my age will instinctively jump to the correct conclusion about his first name without my having to mention it.

      2. MyffyW Silver badge

        Re: Deus Ex Machina

        I also recommend Zen and the Art of Motorcycle Maintenance where teaching in the US is an interesting tangent in the author's own journey.

        1. The Oncoming Scorn Silver badge
          Thumb Up

          Re: Deus Ex Machina

          I prefer that to the ten book epic Zen And The Art Of Going To The Lavatory.

      3. Anonymous Coward
        Anonymous Coward

        Re: Deus Ex Machina

        For those who want to "understand" the US teaching philosophy

        I initially misunderstood this to mean the US teaching of philosophy rather than the philosophy of teaching.

        I fear the intended second meaning has also contaminated most of, at least the English speaking, world.

        In future philosophy graduates might be the only ones worth hiring (and survivors of the high voltage lab. :)

        Over the last three to four decades I have noticed that humanities students and graduates were increasingly more articulate and capable of presenting a cogent, well reasoned and curiously more practical argument than their peers in the sciences, engineering etc.

        This in a period when the humanities have also been under an unremitting siege with whole University departments† closing as a result.

        † thinking Durham Linguistics department ca 2003.

    3. Ian Johnston Silver badge

      Re: Deus Ex Machina

      Is that a longer version of ...

      Doctor: I'm afraid this is terminal.

      Patient: How long have I got?

      Doctor: Ten.

      Patient: Years? Months? Weeks? Not ... days?

      Doctor: Nine.

      1. Anonymous Coward
        Anonymous Coward

        Re: Deus Ex Machina

        Doctor: Ten.

        Patient: Years? Months? Weeks? Not ... days?

        Doctor: Nine.

        I thought it was Tom Baker speaking here. :)

        1. Ian Johnston Silver badge

          Re: Deus Ex Machina

          Who?

    4. Anonymous Coward
      Anonymous Coward

      Re: Deus Ex Machina

      I guess literally Deus ex machina might mean [a] god from (out of) a (the) mechanism which is dreadfully close to the claims of AI/LLM pedlars.

      Of course the phrase, in its normal literary context, is a very cheap plot device to extract the author from a corner into which they had painted themselves.

      The "rules" of classic crime fiction largely preclude the device. (The "No Chinaman" ones.)

      The second meaning is closer to the reality of the false gods of AI/LLM.

  4. Boris the Cockroach Silver badge
    Boffin

    Maybe

    course work is not the be all and end all that some in the education system seem to think it is.

    And that results by exam(s) would be better (no smartphones, dumb calculators only)

    Icon because students need to learn how to deal with a problem while under time pressure and no backup (debugging some non-working code while the boss is screaming about a non-functioning website/database/robot/flight control/nuclear reactor safety system is also a good teacher of how to work under pressure... )

    1. Jou (Mxyzptlk) Silver badge

      Re: Maybe

      > while the boss is screaming

      You should have read enough "Who, Me?" and "On Call" to know to get away from such a boss as soon as possible - possibly the very day he starts screaming like that. Depending on the country you live in, you actually can. As far as Germany is concerned: if the boss misbehaves you can leave instantly, via an "Außerordentliche Kündigung" (extraordinary termination) based on gross misconduct.

      1. Anonymous Coward
        Anonymous Coward

        Re: Maybe

        Not to be confused with "Außerordentliche Überstellung" I think ... maybe ... just sayin' ... ;)

        1. Jou (Mxyzptlk) Silver badge

          Re: Maybe

          Well, you just made an "Außerordentliche Unterstellung!"

      2. M. Poolman

        Re: Maybe

        Even if the boss isn't screaming you should be able to work under pressure when the shot hits the fan. Y'know like in a job where the stuff you do actually matters and there will be real consequences if you screw up.

    2. doublelayer Silver badge

      Re: Maybe

      However, since we're talking about IT specifically, there are a lot of things that can't be tested in the few hours you're generally allowed to shut students in a room without letting them out, and a lot of things that would never be done in that environment. If they're to write some relatively complicated code, then they will need more than a few hours, and it's totally fine if they want to use a man page to get a function's specification or a web search to figure out what Clang meant when it emitted its cryptic error*. Most of the time, using an LLM to write it for you would be the same as asking your friend who already knows this stuff to do so; it might work, but you won't learn by doing it and it's therefore worth forbidding.

      * Yes, theoretically they might eventually need to write code without access to the internet, meaning they can't get help understanding some of those. However, most of the time, they will have access. If the company's network is down, they can use their phone to check something like that. Also, the point of letting them look things up now is to give them the knowledge so that, when they actually don't have access, the chance is good they'll understand it because they've encountered something like it before.

      1. rg287 Silver badge

        Re: Maybe

        If the company's network is down, they can use their phone to check something like that.

        Quite.

        Some people will say this is cheating. To which I say: show me the binder full of comprehensive documentation. Enterprise software and big iron shipped with docs (and at a certain level still do).

        If it wasn't cheating to read the binder in 1986, it's not cheating to look at the online documentation (or the forums that pass for documentation these days!). If people genuinely think there's a likelihood that a person will be without internet access or their phone... then are the relevant systems adequately documented offline and why doesn't your office have a filing cabinet full of well-thumbed manuals for the libraries, software tools and compilers that you use?

        Of course there's always a place for knowing stuff off the top of your head - that's just basic competence (in the same way that knowing basic times tables is just quicker and easier than reaching for a calculator). But even experts know when to check the manual.

        And this also harks back to the original comment deriding the idea of coursework. This depends on the level of education of course, and the nature of the subject.

        There is no point making a Masters student in Climate Forecasting do an exam that involves running calculations by hand. What are you proving? In an hour they could maybe run the calculations to determine the forecast for one pixel in a model. The formulae for that stuff would cover more than a side of A4 when printed. There's a reason you just call the library in MATLAB. My degree was in natural sciences, and the professors doing research that involved crunching bigger data sets (satellite, long time-series, etc.) couldn't just recite the algorithms they used off the top of their heads - they were extensive and complex. Of course, sat in front of a screen they could explain every term and value and what the algorithm was doing, but nobody does that stuff by hand, and they haven't since at least the 1970s.

        So an exam would inevitably be essay-based to test understanding of the topic, and if you want to see if they can actually apply that knowledge to build/extend a meaningful model, then that's going to be a project/coursework, with a robust viva to check their understanding of the project they're presenting (to ensure that it's not just AI-generated).

    3. MachDiamond Silver badge

      Re: Maybe

      "Icon because students need to learn how to deal with a problem while under time pressure and no backup (debugging some non working code while the boss is screaming about a non functioning website/database/robot/flight control/nuclear reactor safety system is also a good teacher of howto work under pressure... )"

      Back in Black, the biography of Terry Pratchett, has him telling about his job at a newspaper where there is NO "writer's block": your editor is glaring at you and sending the message, "Get the story written!" While Douglas Adams could enjoy the sound his deadlines made as they went whizzing past, when I wrote articles for a magazine (mostly making photos), I had concrete-wall deadlines where the sound would have been "splat" and then absolute silence.

  5. Pete 2 Silver badge

    Less than perfect

    > ChatGPT scores B- in engineering

    There is an old joke: What do you call a student who always came last in their classes at medical school?

    Ans: doctor!

    A B- is not at all bad as degrees (or even just undergraduate courses) go. It seems a false equivalence to require machines to always return perfect results - whether in AI, autonomous vehicles or any other endeavour where human performance at a lower level is deemed adequate.

    1. Jimmy2Cows Silver badge

      Re: Less than perfect

      Like it or not, when putting something into a computer one expects it to give a correct answer. A calculator wouldn't be any use if it gave a wrong answer that seemed convincing, but you then had to check the result by some other method. AI is no different, and since AI is being pushed heavily as the answer to everything, that answer damn well better be correct every time. Anything less is of limited to no use, depending on your situation. And certainly not worth the cost and hype currently expended.

      1. Pete 2 Silver badge

        Re: Less than perfect

        > Anything less is of limited to no use, depending on your situation.

        Human development was held back (by an estimated 1000 years) as European dogma insisted that the Aristotelian view of the world was the only plausible / acceptable explanation.

        If AIs get things wrong, then it seems pretty obvious that was because the core data they were trained on was lacking. And many, many, individuals will have used that same faulty data themselves.

        The difference is that AI outputs can and are examined so the flaws in their training can be corrected. Hopefully that won't take 1000 years.

      2. MattAvan

        Re: Less than perfect

        AI is not a better calculator; it is a fallible but intelligent assistant with a large knowledge base, who may or may not have a calculator.

        So it is a matter of whether you could tolerate an assistant who sometimes gets things wrong or outright lies, but still knows nearly everything about anything.

    2. Dante Alighieri
      Boffin

      Re: Less than perfect

      I can assure you that is not true. Been there. Qualified.

      Interesting views re MCQs - sometimes you have to test base knowledge. Other parts examine reasoning and interpretation.

      I google daily to review/confirm/learn - without the core knowledge AND problem solving I would be a much poorer doctor (Attending MD to the left pondians).

      I know where to look, how to balance data sources and critically review.

      And I still stand by: I can teach a bright 18yo to do any part of my job. You just need a shitton of them to cover it all - and together they can't do what I do - I can switch tasks and jump specialities and synthesise.

  6. trevorde Silver badge

    Real world engineering

    Having been a professional mechanical engineer in a previous life, there are some things ChatGPT will never be able to do:

    - placate client about project being late

    - make sense of customer's conflicting requirements

    - explain to boss why project is over budget

    - beg workshop to drop everything and do 'urgent' job

    - drink beer with fabricators on site

    1. MachDiamond Silver badge

      Re: Real world engineering

      "- make sense of customer's conflicting requirements"

      I could see that as a good question on an exam prefaced by a short story about the job. You have to get the thing done, the client is unavailable and you will just have to decide which conflicting requirement is going to be tossed out (and your reasoning behind the decision).

  7. Flak
    Go

    Calculator and non-calculator exams

    Adaptation is needed similar to what has happened in maths exams.

    Any work outside a controlled classroom environment must assume that AI is used by a student. 'How to' should be part of the curriculum and examination.

    In an exam setting, this can be controlled for and non-AI questions could be assessed.

    Both need to be taught and practiced and should be examined and graded. That way, there is still 'value' in education.

    1. John Robson Silver badge

      Re: Calculator and non-calculator exams

      "why elementary school students are taught to do mental math"

      Because even in a calculator exam you should be doing an estimation before you tap things into a calculator...

      1. Anonymous Coward
        Anonymous Coward

        Re: Calculator and non-calculator exams

        Being able to estimate the answer to within a few percent is a useful skill that a lot of people seem to lack.

        I seem to remember saying to a client a few years back "Either 95 percent of your customers died in 2017 or your numbers are seriously f**ked up"

        They insisted their figures were correct (even though to me they seemed wildy out). They accepted there may be something wrong when totalling up their figures for 10 years suggested that two thirds of the population of the UK had died after purchasing their product (despite sales figures confirming that less than 0.5% of the population bought anything from them)

    2. druck Silver badge

      Re: Calculator and non-calculator exams

      If every second time you pressed equals on a calculator it just made up the result, we'd all still happily be using slide rules and log tables.

      1. John Robson Silver badge

        Re: Calculator and non-calculator exams

        I have a slide rule on my desk... well, actually on my laptop keyboard (I use an external keyboard, cos the laptop is up on a stand). I haven't yet got around to learning how to use it, but it's on my list of things I want to do soon.

  8. david1024

    Yes

    All the approaches here sound good together. Just like with a marginal or poor colleague, you have to be able to spot what is right and what is not in the work produced. To me that means including the AI output in the assignment and proving what's good, and why, along with what's garbage.

    Folks reading this have likely used AI or at least seen it, and it has trouble with too many unrelated input parameters and multiple outputs... (to me, that's why it crushes multiple choice questions). At least for now - and that's another problem: eventually AI may be able to beat all the question formats.

    Finally, if a current AI can get a B... The course evaluations are trash to begin with and should be reworked. Really.

    1. doublelayer Silver badge

      Re: Yes

      The course evaluations probably aren't trash. How well an LLM can do on a course is very closely correlated to how well a student copying off the internet can do. There are a lot of courses where doing the work properly requires thinking but it is thinking that plenty of students have already done. As an example, consider computer programming classes. The solutions to these are very much not multiple choice, but while they're introductory courses, there are only so many simple problems that can be set to teach the basics. The answer to "write a function in C++ which takes a two-dimensional array of integers and determine whether it's a valid Sudoku board" is not a simple one, but it's one which you can find online if you want to copy. The LLM can too, so chances are its version will be just fine.
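      For illustration, the Sudoku exercise mentioned above is exactly the sort of problem whose solution is easy to find online. A minimal C++ sketch (assuming, for simplicity, a completely filled 9x9 board with no empty cells) might look like:

      ```cpp
      // Checks whether a completely filled 9x9 grid is a valid Sudoku solution:
      // each row, column and 3x3 box must contain the digits 1..9 exactly once.
      bool is_valid_sudoku(const int board[9][9]) {
          for (int i = 0; i < 9; ++i) {
              int row[10] = {0}, col[10] = {0}, box[10] = {0};  // digit counters
              for (int j = 0; j < 9; ++j) {
                  int r = board[i][j];                              // walk the i-th row
                  int c = board[j][i];                              // walk the i-th column
                  int b = board[3 * (i / 3) + j / 3]                // walk the i-th 3x3 box
                               [3 * (i % 3) + j % 3];
                  if (r < 1 || r > 9 || row[r]++) return false;     // out of range or repeat
                  if (c < 1 || c > 9 || col[c]++) return false;
                  if (b < 1 || b > 9 || box[b]++) return false;
              }
          }
          return true;
      }
      ```

      The point stands: this is thinking the student should do once themselves, but it is also thinking that has been done thousands of times before, which is precisely why an LLM reproduces it effortlessly.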

      To prevent that, someone would need to come up with truly novel questions that are equally doable with the limited knowledge students of that level of experience have. It is not easy. Most attempts are likely to merely scramble the question in such a way that the LLM has trouble parsing it. So let's do that, problem solved. Not really, because by making things more verbose, we run the risk of confusing students as well and foiling our attempts to teach those who are trying to do the work.

      Computer programming is just one example. The same applies to most things involving essays. A student in secondary school writing an essay explaining the effects of the Munich Agreement isn't likely to come up with anything that historians didn't already consider, but that's not the point and their grade is not and shouldn't be based on originality. Designing a curriculum that an LLM can't pass is similar to designing a curriculum where a student can't cheat. You probably can't, and if you manage it, you've probably lost plenty in order to achieve it.

      1. HuBo Silver badge
        Gimp

        Re: Yes

        "scramble the question in such a way that the LLM has trouble"

        Yeah, makes me wonder about the degree to which red herrings and counterfactual variants may help there (somewhat related also to Benn Jordan's data-poisoning approach to foiling AI sloppery). Plenty of avenues for the AE 353 "Aerospace Control Systems" folks to investigate in future (aside from painfully changing everything, with paddles ... so enjoyingly!).

        Another aspect of the Ornik study though is that it suggests GPT-4 is already "proficient" in material from the prerequisites of AE 353 (MATH 225-intro matrix theory, MATH 285-intro diff eq, TAM 212-dynamics, and their own pre-reqs), as seen in Figs 2 to 4 -- a kind of MATLAB/SIMULINK State-Space question. An LLM trained over such material is expectedly overly obese for most folks' practical purposes ... which explains a lot about the unnecessary heft of the tech, and its demand for a sedentary cloud-based lifestyle, imho. Plus, it really doesn't help that it then gets its bulgy love handles all tangled up in Hopf bifurcated undergarments on this type of question, as shown in Fig. 6 ...

        1. doublelayer Silver badge

          Re: Yes

          That's what I advise against. Confuse the text enough and you'll confuse students. I had one professor who did that. Not deliberately, he was just terrible at making himself understood. It would make it harder to cheat. You'd also get your homework back with an odd patchwork: 23 problems with a perfect score and 2 zeros because you solved some other problem. What you solved was clear enough, but evidently it's not what he asked for so no credit. Sometimes, you couldn't even figure out what he wanted even knowing that whatever you thought it was wasn't right. After a bit of this, I compared results with classmates, and about ten of us had all done the same wrong thing and gotten penalized for it.

          You can rewrite the questions over and over until pasting them into common LLMs makes them spit out a wrong answer. The problem is that, at the very best, you've added a reading comprehension task of indefinite difficulty to every assignment you set them. More likely, you think you've done that but there are now errors in your phrasing that you don't know about.* That is unfair to students and it is your fault.

          * Imagine, for example, being required to manually obfuscate code to make it harder to read. There's software that can do that automatically, but you don't have that. Your tools are a text editor, a compiler, and manual testing. How much could you do to the file and be confident that you definitely didn't change the behavior of the resulting program in the slightest? The same is true when you're rephrasing text. It's just a lot harder to see, because it all still looks fine. Assignments are just code that gets interpreted by a more flexible compiler.

          1. HuBo Silver badge
            Gimp

            Re: Yes

            Yeah, no one would want Professor amanfromMars 1 teaching all required courses in the curriculum ... and yet, if he didn't exist (technically), that might leave the overall student experience a bit insipid for lack of a surprising flavor, however tangy and traumatic ...

  9. anthonyhegedus Silver badge

    Calculators

    We were always told "you won't always have a calculator with you" and in a sense that's correct. Sometimes you won't have a phone or calculator and being able to understand how to make calculations on paper is a very important skill for that reason alone, and also because it teaches you how things like mathematics work. We still need scientists and mathematicians who know how it all works.

    Now with AI, we could say the same thing, except whilst calculators liberated us from having to manually do calculations or use log tables, AI "liberates" us from thinking. What's the point in taking hours to think something through when an AI can do it in minutes or seconds, and come up with a "good enough" answer?

    And therein lies the problem. "Good enough" will be good enough. People will lose their ability to do critical thinking and reasoning. Already people are getting worse at navigating because of satnavs in cars. We are in real danger of losing some of our cognitive abilities, through disuse.

    I must say, I use AI, but I use it with caution. I still read things through, despite the temptation to copy and paste. And it does save me time. I can see the reasoning powers of AI getting better month by month and it's kind of getting scarily good sometimes.

    But I do fear that this will gradually erode our cognitive abilities. And we've all read and seen enough scifi to know the way that goes...

    1. claimed

      Re: Calculators

      So the editor of El Reg has no ability to write articles, because all they are doing is taking the output generated by the authors and doing the final pass checks?

      I think there will be plenty of space for thinking, critical thinking especially, and just as good authors tend to have read a lot of books, the skilled humans will be reading more than writing, and that’s ok

      1. SomeRandom1

        Re: Calculators

        I see that as the problem - no-one is taught critical thinking and so cannot determine if the machine is even close to correct. When people don't understand how to get an answer, they can't even guess if it's correct.

        As AI simply regurgitates / fabricates the data poured into it, then how will we ever advance if people are unable to produce the next step forward? Humanity will simply stagnate as the machine cannot do it for them.

        1. anthonyhegedus Silver badge

          Re: Calculators

          This is what I fear. Humans simply regurgitate facts and data learnt over their lives too. But AI simulates that, and in some cases so well that it's not easily distinguishable. Without creative minds, yes, we will stagnate

      2. Roland6 Silver badge

        Re: Calculators

        It’s not the current editor I’m worried about, it’s their successors. If you have been brought up on LLM “good enough” output, I suggest your ability to sift through stuff and identify the meat is going to be curtailed.

    2. MrBanana Silver badge

      Re: Calculators

      I was at that age in school where the transition from slide rules & log tables was made to electronic calculators. They were banned at first, but then gradually accepted as their cost went down and availability increased. What struck me at the time, even as a child, was the rush to accept them as the ultimate truth, without any consideration for accuracy. And by accuracy, I mean the ability of the user to operate the thing correctly, and paying heed to a result that looks a little suspicious. How many miles in 346 kilometres? - click, click, click, writes down 5536. No alarm bells ringing that the calculation might be backwards, or the decimal point is having a day off? One of the greatest skills I learned at school was the ability to make a rough, ball-park approximation to the answer, before totally trusting a calculator.
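      The ball-park habit described above can even be spelled out in code. Here is a tiny hypothetical C++ helper (the name and the factor-of-two threshold are my own choices, purely for illustration) that flags calculator results which disagree wildly with a rough mental estimate:

      ```cpp
      // Sanity-check a calculated result against a rough mental estimate.
      // A backwards conversion or a slipped decimal point is typically off
      // by an order of magnitude, so a factor-of-two window catches it.
      bool plausible(double rough_estimate, double calculated) {
          if (rough_estimate <= 0.0 || calculated <= 0.0) return false;
          double ratio = calculated / rough_estimate;
          return ratio > 0.5 && ratio < 2.0;   // within 2x either way
      }
      ```

      346 km is "a bit over 200 miles" by the rough five-eighths rule, so the exact answer of about 215 passes the check, while the backwards 5536 fails spectacularly.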

    3. MachDiamond Silver badge

      Re: Calculators

      "Already people are getting worse at navigating because of satnavs in cars."

      The dumb ones are getting worse. I have a satnav in the car and use it all of the time. The problem is that these days I do plenty of work on new construction where there isn't an official registered address yet. For a long trip with limited support, it's important to be able to find an alternate route when the satnav isn't being helpful (drive across lake/take ferry that is not operating). Behind my seat, I have maps printed on paper. In a box, I have more map books I can take with me if I plan a trip further away than I might usually go. A satnav is a useful tool, but knowing what to do when it's on the blink or forgetful is important. The really concerning tales are where somebody is using Google Maps and relying on there being mobile data for the whole route. I 'could' use that, but prefer my standalone satnav that isn't reporting my whereabouts in 5 minute increments (or whatever).

      Just like any sort of problem in school, the most important lesson is how to set up a method to get to the proper solution more than the answer itself. In high school, the chemistry teacher required the use of a calculator since the mechanics of arithmetic weren't the focus of the class, but accurate answers were important. Just like knowing the difference between H2O, H2O2, H2SO4 and the stuff that comes out of the tap.

      1. Anonymous Coward
        Anonymous Coward

        Re: Calculators

        ... and unicode is pretty swell too (even survives copy/paste): H₂O, H₂O₂, H₂SO₄ ... much more convenient than sub-/super-script tags (easily screwed up): H2O, H2O2, H2SO4 ...

  10. Anonymous Coward
    Anonymous Coward

    Wikipedia?

    As reported, the introduction of calculators meant exam assessment placed less merit on the ability to calculate and more on the ability to know what to calculate (which was an extension of the adjustment when slide rules started to be allowed in lieu of log tables). When power tools were introduced to joinery workshops, apprentices needed to learn how to use them as well as the traditional hand tools. Assessments need to assess the ability to use the appropriate tools, so of course things need to change.

    Isn't this just an extension of the Wikipedia problem? By that I mean students use it as an authoritative source - which it isn't. When I was tutoring I told my students they should certainly use it as a source of ideas - but to then go and check if those ideas are actually right. If I saw Wikipedia in the list of references in an assignment, it would immediately lose all marks allocated to referencing (usually 10%).

    1. Jou (Mxyzptlk) Silver badge

      Re: Wikipedia?

      You've got to reference the reference Wikipedia refers to.

      1. Ian Johnston Silver badge

        Re: Wikipedia?

        Wikipedia's referencing problem is that far too often it works the wrong way round. "Wikipedians" decide what opinion they want to support (NPOV my arse), choose references to support it and ruthlessly revert any dissenting views and supporting references. That means that even when the references given can be verified they - and Wikipedia - are useless for academic purposes because of the bias which has gone into their selection.

        1. ecofeco Silver badge

          Re: Wikipedia?

          Wiki is good as a VERY general reference, but I've seen, first hand everything you've described and I now often cross check Wiki with other sources.

          Have to.

  11. An_Old_Dog Silver badge

    We're Already Boned

    ... because I increasingly see/hear people educated in technology writing and saying things such as, "AI's increasing ability to reason..."

    These are people who should know LLMs and generative AIs do not reason.

    1. ecofeco Silver badge

      Re: We're Already Boned

      There are a LOT of savants passing themselves off as intelligent and many other savants who just accept it.

      1. nautica Silver badge
        Holmes

        Re: We're Already Boned

        “Never confuse education with intelligence; you can have a PhD and still be an idiot.”

        ― Richard P. Feynman

    2. MattAvan

      Re: We're Already Boned

      Generative AIs use deep neural networks. Human brains use deep neural networks. AIs learn. Humans learn. Both learn by altering their internal neural pathways in subtle ways.

      There is no way AIs can produce the things they do without real (and improving) intelligence behind it.

      Those in denial are holding on to some sort of ghost in the machine, presumably. Immaterial souls embedded into neurons and urging them on.

  12. Primus Secundus Tertius

    Devilish thinking

    As Ambrose Bierce almost put it in his Devil's Dictionary: ChatGPT: that with which we think we think.

  13. Ian Johnston Silver badge

    I spent many years as an academic teacher and examiner in engineering and as an Academic Conduct Officer, investigating and when necessary penalising cases of plagiarism and collusion.

    Here's a brief take. The one-hour talk costs money but is well worth it.

    Too many academics are lazy and set assignment questions which can be answered with a quick Google. However, universities have Google too, as well as Turnitin, and we can generally catch any substantial amount of copied material. It's easy enough to evade that: all you have to do is rewrite the material in your own words, using reasonably good English. However, since that's basically what we want you to do anyway, we don't mind. It's generally easier to write the stuff yourself, but if you want to waste time paraphrasing and rewriting, fill your boots.

    Human-powered (often it's thousands of Filipino graduates) sites like CourseHero will effectively do the rewriting for you, but they cost money and that barrier to entry is enough to deter most students. ChatGPT, on the other hand, reduces the cost to practically nothing, and so the need to paraphrase is no longer an academic or financial hurdle, and the lazy examiners are screwed.

    What's the answer? Basically "ask better questions". If what you are asking can be retrieved by a web search you shouldn't have been asking it in the first place and you deserve all you get when students copy the answers from somewhere else.

    Brief take, so just one example. One particular course was generating huge numbers of cases with a question which started "Explain the meaning of the five words underlined in the following paragraph." At my suggestion they changed it to "Give examples of how the concepts described by the five words underlined in the following paragraph affect your daily life." Plagiarism reports for that assignment fell twenty-fold, partly because the new question was much harder to copy and partly because it was a much more interesting and engaging one for students.

    In general, though, we had very low levels of cheating. Most of our students wanted to make an honest stab at the work. Compare that with medicine, where 80% of all doctors claim to have learning difficulties ... ah, that sweet, sweet extra time in exams.

    1. O'Reg Inalsin

      " ChatGPT, on the other hand, reduces the cost to practically nothing ..."

      Ironically, the $5 billion ChatGPT is expected to lose this year is a few orders of magnitude more than that earned by the humans powering "CourseHero".

    2. Dante Alighieri
      Boffin

      Bullshit on your final stat.

      citation needed - xkcd passim

      DOI did those exams.

      1. Ian Johnston Silver badge

        Bullshit on your final stat.

        Not mine. It's from a BMA report on the profession as a whole. It perhaps explains why doctors are so willing to sign GenZ off work indefinitely for "anxiety" and the like.

    3. MachDiamond Silver badge

      "Human-powered (often it's thousands of Filipino graduates) sites like CourseHero will effectively do the rewriting for you, but they cost money and that barrier to entry is enough to deter most students."

      It was mail-order way back when I was in Uni, and yes, price was a barrier. The upside was that for some courses, the bibliography that came with the paper was gold. There was no internet to look things up on, so if the library card catalogue was a bust, you bought a paper rather than pull a poor grade and have to spend more money repeating the damn course. Copying said paper and handing it in was not a good idea, though: there were so few sources back then that profs were likely to have seen those papers a few times if they'd been teaching a while.

      1. Ian Johnston Silver badge

        I had one student who confessed to the university that he had bought an essay from an essay mill.

        Why did he confess? Because a year after he made the purchase, the mill in question (based in Birmingham) contacted him to say that unless he paid them a thousand pounds they would send us copies of all his correspondence with them, the material they supplied and the receipt for his payment. He paid up.

        A year later they contacted him again. Another thousand pounds, same threats. This time he worked out that the blackmail would probably be indefinite, and confessed to us.

        Moral: People who make a living helping students to cheat may not have high moral standards. Whoulda thunk it?

  14. O'Reg Inalsin

    Even if college coursework does include a wide, shallow component, that's not all bad: knowing which tools exist is just the start of a longer and deeper process of applying them to real-world problems, often in new and original ways.

  15. Philo T Farnsworth Silver badge

    Career choice

    Ornik explained, "What we said is, 'Okay let's assume that indeed the students are, or at least some students are, trying to get an amazing grade or trying to get an A without any knowledge whatsoever. Could they do that?'"

    And at what year in the future will they finally retire from the United States Senate?

  16. Ken Moorhouse Silver badge

    one professor likens AI to the arrival of the calculator in the classroom

    IIRC when I did the WordPerfect Certified Trainer course there was a trick question regarding the use of WordPerfect for calculation in tables where, if you blindly followed the instructions, you got an invalid answer due to arithmetic overflow of some sort. The candidate had to use their brain to figure out that the result was nonsense, and how to adjust the method to give the meaningful answer that was required.

    I remember when decent calculators cost around £100. Who remembers Metyclean in Victoria Street, which had all the latest gadgets? Sinclair (of C5 fame) arrived on the scene and started flogging scientific calculators at a fraction of the cost. Only problem was that they would give a supposedly valid answer to, say, the arcsine of 1.5 or the tan of 90 degrees, whereas the more expensive ones would throw an error. Savvy lecturers could, of course, build such traps into their homework questions.
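    The same trap still exists in software. A minimal sketch in Python's standard `math` module: `asin` outside [-1, 1] is rejected outright (like the pricier calculators), while tan of "90 degrees" silently returns a huge but plausible-looking number, because floating point cannot represent π/2 exactly.

    ```python
    import math

    # arcsine is only defined on [-1, 1]; Python refuses the
    # out-of-range input, as the expensive calculators did.
    try:
        math.asin(1.5)
    except ValueError as e:
        print("asin(1.5):", e)  # math domain error

    # tan(90°) is undefined, but radians(90) is only an approximation
    # of pi/2, so instead of an error you get an enormous number:
    # exactly the kind of "supposedly valid answer" the cheap ones gave.
    print("tan(90°):", math.tan(math.radians(90)))
    ```

    The lesson the lecturers were teaching still applies: a numeric result is not the same thing as a meaningful one.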

  17. nautica Silver badge
    Boffin

    Maybe the problem here is one of understanding the REAL problem.

    Not sure I understand the problem...

    Went through 290+ credit hours of undergrad and graduate engineering work at a premier science and engineering school. Never had a test / exam (multiple choice; essay; single calculus problems whose solutions required three pages or more in the provided test booklet) which was not given in the classroom, and which was not hand-graded.

    Taught engineering / math for twenty years. First sentence of the course syllabus for every course taught read: "Absolutely no electronic devices (calculator; phone; etc.) of any kind are allowed while taking any test / exam in this course...". (This meant extra work, of course, since the exam / test had to be crafted such that it was 'do-able' with the students' readily-available math / arithmetic skills). All tests / exams were given in-room; none were ever "take-home".

    Perhaps, in retrospect, the problem is easily understood: it's not the students who are lazy these days, but those who would "teach" them.

    1. doublelayer Silver badge

      Re: Maybe the problem here is one of understanding the REAL problem.

      The problem is not with tests. It's not too hard to prevent cheating on things where you control the environment. The problem is that not all courses can do all their evaluations in the form of tests which, at their longest, might be composed of five-hour chunks. Sometimes a student needs to do something that takes longer than that, meaning you have to let them do it somewhere else. If they're doing a research paper, they need access to research materials, time to read all of them and figure out which bits are actually important, and time and tools to do new analysis based on their research, tools which will include computers most or all of the time; just writing the resulting paper will take longer than any exam sitting allows. I think your focus on mathematics and related courses may have caused you to think that what works to test students there will work for other subjects, when it often doesn't.

    2. MachDiamond Silver badge

      Re: Maybe the problem here is one of understanding the REAL problem.

      "This meant extra work, of course, since the exam / test had to be crafted such that it was 'do-able' with the students' readily-available math / arithmetic skills"

      There has to be some assumption that the student is ready to do the course work if there are pre-requisites. I've had to do catch-up work when I found myself unprepared to keep up. About half the time I also found other people in the class trying to catch up, which means the teacher was assuming too much. You either rise to meet the challenge or it's another term of taking that class again.

      1. nautica Silver badge
        Happy

        Re: Maybe the problem here is one of understanding the REAL problem.

        "This meant extra work, of course, since the exam / test had to be crafted such that it was 'do-able' with the students' readily-available math / arithmetic skills"

        ------------------------------------------------------------------------------------------------------

        To be absolutely unambiguous as to what was meant (in the original comment), this sentence should have read (with emphasis added here, for clarity):

        ..."This meant extra work, of course, for the professor when writing an exam / test, since the exam / test had to be crafted such that it was 'do-able' with the students' readily-available math / arithmetic skills"...

  18. Grindslow_knoll

    Allow 'failure'

    Grading is imperfect as a measure of education, especially if you're not allowed to fail anyone (a B- at a North American university is 'bad' because of curving).

    You can have oral exams with written preparation, without calculators, on topics such as engineering, optics, or networking, where your grade is determined by how long you hold out when the prof questions you: an hour is a 14/20, two hours a 16/20, and so on. Formulas are provided so you don't have to memorise them; the focus is on understanding. But then you can't have classes of 250 students; that only works for classes of 15-25 at best, so your tuition model needs to change.

    But that is also a cultural thing, do you send your kids to university to get the degree no matter what, or do you send them to see if they like the subject and do well in it, and if not, and they don't do well, allow them to do something else without going bankrupt?

    Failure is the best learning experience. Another model is to teach the material in the first half of the course, have a brief exam (never multiple choice unless you also have to write down a half page reasoning), then the remainder is a real world project where you go from knowledge to wisdom.

    If you want independent, critical thinking, reward that, if you want exams and courses that test 'meets checkbox x', then that too has its place, but then expect to compete with tools.

  19. MachDiamond Silver badge

    Weighting shift

    If students are using AI to do their homework for them, the grade emphasis needs to shift more towards quizzes and tests. Somebody getting full marks on their homework but falling flat on their face in tests is likely outsourcing their thinking.

    The next AI system should be called "Gaspode", the thinking-brain dog for Foul Old Ron.

  20. SundogUK Silver badge

    "He said the present feels a lot like the dot-com bubble around the year 2000."

    This implies that AI will have its wobbles but then become successful and omnipresent. Which is not necessarily true.

  21. Anonymous Coward
    Anonymous Coward

    Go back to regain the future

    Just up the share given to exams, as an example:

    * 10% on a solo project/essay/whatever

    * 10% on a team project/essay/whatever

    * 10% on an oral defence/exam

    * 30% on an open book exam, no technology other than a scientific (non-graphing) calculator

    * 40% on a closed book exam, no technology other than a scientific (non-graphing) calculator

    They can cheat all they want, but now they cannot pass unless they actually know their stuff.
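    The arithmetic of that scheme is easy to sketch. A minimal illustration (the component names, scores, and pass mark below are mine, not the commenter's): with 70% of the mark behind invigilated exams and an oral, acing the take-home components alone cannot get a student anywhere near a pass.

    ```python
    # Weights from the proposal above; they sum to 1.0.
    WEIGHTS = {
        "solo_project": 0.10,
        "team_project": 0.10,
        "oral_defence": 0.10,
        "open_book_exam": 0.30,
        "closed_book_exam": 0.40,
    }

    def final_mark(scores: dict) -> float:
        """Weighted average of component scores, each on a 0-100 scale."""
        return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

    # Hypothetical student: perfect take-home work (possibly outsourced),
    # weak showing everywhere they actually have to know their stuff.
    scores = {
        "solo_project": 95,
        "team_project": 95,
        "oral_defence": 40,
        "open_book_exam": 35,
        "closed_book_exam": 30,
    }
    print(final_mark(scores))  # roughly 45.5 out of 100: a clear fail
    ```

    Shifting the weights is the whole design choice: the cheatable components become a rounding error on the final mark.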

  22. jlturriff

    Different teaching paradigms

    I grew up in Canada in the 1950s-60s, and I think I learned a lot more then than I did later in the US, because Canadian tests (in all subjects) involved numerous essay questions, but US tests had mostly multiple-choice and true/false questions. The essay questions required me to show my understanding of the subjects, so I learned (in addition to the subject) to pay more attention to the teacher and the lesson; after moving to the US, where it was much easier to guess answers than to explain my understanding, I lost the learning discipline that I had used in the past.

    I think that a big problem with today's education systems is that they aren't focused on teaching students to think, but on teaching them to pass tests.

  23. imanidiot Silver badge

    Speaking as someone holding a BSc in Mechanical Engineering, I would say there are few things in engineering that you could cheat on with "AI". You're not going to do your calculus exams through AI and show your work deriving integrals or differentials. You're not going to cheat on your control systems exam at reading or writing a Bode plot. I highly doubt this is actually a problem in the majority of engineering disciplines, especially since it can be avoided by just having proctored exams.
