AI programming assistants mean rethinking computer science education

While the legal and ethical implications of assistive AI models like GitHub's Copilot continue to be sorted out, computer scientists continue to find uses for large language models and urge educators to adapt. Brett A. Becker, assistant professor at University College Dublin in Ireland, provided The Register with pre- …

  1. Mike 137 Silver badge

    An unmentioned (or unmentionable) issue

    Quite apart from the issues discussed here, there is another serious potential issue. Relying as these tools do on samples of extant code rather than on understanding of algorithms and methods, there is a distinct possibility that they will contribute to the perpetuation of poor practice (e.g. insecure coding). This is not a fantasy -- it already retards improvement of public standards in several spheres, the content of which is based on consensus of current practitioner opinion.

    1. Anonymous Coward
      Anonymous Coward

      Re: An unmentioned (or unmentionable) issue

      Well, "we don't know yet" is the honest answer.

    2. Michael Wojcik Silver badge

      Re: An unmentioned (or unmentionable) issue

      The article did raise that possibility, though it didn't discuss it at length. I don't know if it comes up in the paper by Becker et al. (haven't read it).

      This was already a problem with, for example, the simplified code fragments used for illustration in programming textbooks, which often omitted input sanitization and error checking for clarity; with open source, from which a certain type of developer would habitually crib; and with resources such as StackOverflow.

      I agree, though, that more automation exacerbates the problem. The easier it is to find a solution, the less likely some developers1 are to search for a good solution.

      1As always, this is a question of economics. With any sort of labor, if you want quality, you have to provide an incentive for it. That includes, on the one hand, rewarding it – often by inculcating a culture of quality so the reward is at least partly intangible – and on the other not penalizing it, for example by over-rewarding short time to completion or other naive measures of productivity.

  2. Pascal Monett Silver badge

    "Programming Is Hard – Or at Least It Used to Be"

    Programming is still hard. I see that every time I explain basic Excel functions to a new class. There's always at least one member of the audience whose eyes glaze over.

    Low code? You still need to know the result you want to obtain, and that's where many people fail.

    I'm not saying that you need to be more intelligent to program. I'm saying that, to program, you need a certain mindset and, if you don't have it, you're going to have a lot of trouble trying.

    It's like mathematics. I've never been good at maths. I cannot count the number of times when I told people that I am a programmer and they answered "well you're good at math then". No, I'm not. A programmer doesn't use math, a programmer uses logic.

    There may be logic in mathematics, but I've never been able to understand it beyond basic calculus.

    This low-code, AI-assisted stuff? It's just going to create another nightmare like Access databases and Excel spreadsheet infestations.

    1. Mike 137 Silver badge

      Re: "Programming Is Hard – Or at Least It Used to Be"

      "A programmer doesn't use math, a programmer uses logic."

      It might be validly argued that what we generally refer to as logic (e.g. propositional calculus) is a branch of mathematics. But be that as it may, I use both logic and "math" in programming according to the demands of the problem to be solved. Indeed, most of my programming was at one time dedicated to implementing applied mathematical algorithms (e.g. Fourier transforms). However, in general the mindset required for mathematics is the same as that required for logic, and consequently for programming in the real sense (that is, not just stringing together library methods and borrowed code fragments, which is what the "AI" seems to be doing).

      1. Michael Wojcik Silver badge

        Re: "Programming Is Hard – Or at Least It Used to Be"

        what we generally refer to as logic (e.g. propositional calculus)

        I don't think that propositional logic is what most people refer to as "logic". It might be what some specialists have in mind, but the majority of the population seem to use "logic" to mean something like "conscious reasoning" that follows certain patterns – patterns which include informal versions of conjunction, disjunction, and implication, and, for some people, other aspects of informal logic such as logical fallacies and rhetorical theory.

        And propositional logic is only the tip of the logic iceberg. After that there are existential and universal quantifiers (first-order logic), second- and higher-order logics, doxastic logic, modal logic (of which doxastic can be considered a particular case, though doxastic is of special interest because in itself it's a complete formal system), and so on.

        is a branch of mathematics

        Yes, formal logics are very definitely part of mathematics. That is, they are formal abstract systems for expressing sentences and deriving others which are tautologically equivalent.
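        To make the distinction concrete (standard textbook examples, nothing specific to the article or the paper): a propositional tautology mentions only whole propositions, while even the simplest first-order sentence has to quantify over individuals.

        \[
        (P \land (P \rightarrow Q)) \rightarrow Q \qquad \text{(propositional: modus ponens as a tautology)}
        \]
        \[
        \bigl(\forall x\,(H(x) \rightarrow M(x)) \land H(s)\bigr) \rightarrow M(s) \qquad \text{(first-order: quantification over individuals)}
        \]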

    2. Anonymous Coward
      Anonymous Coward

      Re: "Programming Is Hard – Or at Least It Used to Be"

      On the other hand, it could drive legacy technologies out more quickly, because what you ask the AI to build can be largely language-agnostic.

      All of us here know that one guy who comes in twice a year to update his Access-database-backed Excel macro garbage for the accountants, and we all wish he would fuck off. This crowd is currently in the process of moving over to Blazor, which I think is the next tech that will stick around for 20 years longer than it should and be a pain in the ass in the years to come.

      You know who you are, and on behalf of everyone here...no I will not just "allow all macros"...and no, a single massive SQL query that dumps everything into Access so you can get to the data from a VBA macro without a proper database query is not a solution.

    3. steviebuk Silver badge

      Re: "Programming Is Hard – Or at Least It Used to Be"

      The maths comment is good to hear. Back in the late 80s and early 90s, while still in high school in the UK, I did part-time work at a hotel where my mum worked. I remember specifically being behind the bar (wasn't allowed to serve, obviously) when my mum said to someone working there, "He wants to be a programmer, but you do it – don't you require maths?" He said, "Yes, you'll need maths to do programming." I disagreed. Got into college and eventually got learning Pascal. Thought he was right because I've always struggled. After the five years of college courses I got to the end and decided programming wasn't for me. Maybe it is a maths issue. But glad to hear it's not. In 2000 I finished all my courses and became just an engineer instead. I've started looking at Python again though – I understand it a bit better than I did in college – and now I'm looking at PowerShell too.

  3. Neil Barnes Silver badge

    "In other contexts, we use spell-checkers...

    ...grammar-checking tools that suggest rewording, predictive text and email auto-reply suggestions – all machine-generated,"

    Some of us don't. To me, a spelling checker is a handy *review* tool though its inability to identify the wrong word correctly spelt can be an issue; a grammar check reflects the style guide and biases of the group that wrote it; predictive text is a menace in almost *all* situations; and email auto-reply is an impertinent intrusion into my chain of thought. But maybe that's just me (and yes, as it happens my Master's thesis was based on identifying whether words which are *not* in a dictionary are correctly spelt).

    On the main subject: it looks as if computer science classes will devolve from how to write correct code to how to ask the computer to write the correct code for you... and how will you know it has?

    1. Mike 137 Silver badge

      Re: "In other contexts, we use spell-checkers...

      "it looks as if computer science classes will devolve from how to write correct code to how to ask the computer to write the correct code for you"

      "Computer science" classes that just teach "how to write correct code" aren't computer science classes -- they're programming classes. Computer science is vastly more than training in programming. It's an entire discipline covering the math and principles of computability, architectures, algorithms etc. and the history of all that. Coding (even programming) is a very small part of the real computer science curriculum. But just as programming has become "coding", the scope of the curriculum continues to contract to accommodate a mass market of potential students who look on education as merely a passport to employment as opposed to the acquisition of capabilities. Sadly that motivation benefits increasingly commercialised educational establishments as head count equates to revenue but costs of delivery escalate in proportion to subject depth and breadth.

      1. Blank Reg

        Re: "In other contexts, we use spell-checkers...

        I've been trying to hire for roles requiring hard-core algorithm ability and have struggled to find people. Having interviewed over 50 candidates, I've seen only one pass the technical test. I took the same test and recall wondering if I'd missed something, as it seemed too simple.

        I get people with a PhD who can't string together much more than hello world, and others who seem unable to understand any algorithm more complex than bubble sort. Something is wrong with how we are teaching computer science if this is the result.

        1. Michael Wojcik Silver badge

          Re: "In other contexts, we use spell-checkers...

          Computer science degree programs are going to differ among institutions, and the current accreditation guidelines here in the US are somewhat outdated and not really suitable for ensuring program quality or results – they're more about establishing a baseline and providing a bit of credibility to distinguish between real and junk degrees.

          And students vary widely, and the courses of study they choose, within their degree programs, vary widely.

          Baccalaureate CS degrees (in the US) don't claim to denote any particular ability to understand complex algorithms. It's a general degree in the area of computing theory, computer technology, and programming. I've known people with CS bachelor's degrees who were not computer scientists in any way but excellent software developers, and I've known others who couldn't write code worth a damn but were off to a good start in theory.

          Now, it would be reasonable to hope that a CS PhD would have some facility with algorithms; but even at that level there will be considerable range, and there are plenty of research fields in CS which are not oriented to understanding algorithms.

          That said, people who work in CS education and care about it – Mark Guzdial, for example – are certainly in agreement that CS pedagogy needs a lot of work, and that most departments and teachers aren't paying a lot of attention to the extant research and curriculum development. Some academic fields are relatively sensitive to pedagogical concerns (in the US, at the university level, composition and ESL are examples); others are less so, prioritizing other kinds of work.

    2. captain veg Silver badge

      Re: "In other contexts, we use spell-checkers...

      The first thing I do after setting up a new machine or upgrading software is *turn off* all the spill chucking and -- especially -- the grammar-checking features. Just STFU about me using the passive voice; I'm writing a technical report, not a novel.

      -A.

      1. Michael Wojcik Silver badge

        Re: "In other contexts, we use spell-checkers...

        Well, and this is one of the major problems with grammar/usage/mechanics/style "checkers": even when they analyze text correctly, they're applying an extremely coarse and dubious set of heuristics. They can help some writers in some circumstances, but returns diminish rapidly for authors who are well-trained or attentive to matters of usage and style, or for writing situations with conventions that don't match the assumptions of the team that built the checker.
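        A toy illustration of how coarse these heuristics can be – a deliberately simplified sketch, not how any particular product works:

        import re

        # A naive "passive voice" heuristic of the kind such checkers rely on:
        # flag a form of "to be" followed by a word ending in -ed.
        PASSIVE = re.compile(r"\b(is|are|was|were|be|been|being)\s+\w+ed\b", re.I)

        print(bool(PASSIVE.search("The data were analyzed by the script.")))  # True: flagged correctly
        print(bool(PASSIVE.search("The bug was beaten into submission.")))    # False: a real passive, missed
        print(bool(PASSIVE.search("She was excited about the results.")))     # True: false positive

        Three lines of regex already show the failure modes: irregular participles slip through, and adjectival uses get flagged as passives. Real checkers are more elaborate, but the same trade-off follows them around.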

        Even style guides written by human experts are problematic. Richard Ohmann's classic "Use Definite, Specific, Concrete Language" punched a hole in the style-guide concept in 1979, and most people have yet to get the memo.

  4. This post has been deleted by its author

  5. Howard Sway Silver badge

    The boffins say AI tools can help students in various ways

    "The authors also see advantages for educators, who could use assistive tools to generate better student exercises, to generate explanations of code, and to provide students with more illustrative examples of programming constructs."

    Sounds like the problem is not students wanting to use these tools, but teachers wanting to use them. If your exercises aren't so good, write better ones. If you can't explain code, you shouldn't be teaching the subject.

    Teaching coding is about explaining the principles, and how to use them to build real programs. There's no place for these AI tools at all with good teachers and textbooks. All this worry about "kids cheating" is overblown in my opinion: if they want to learn programming because they are interested in it, then they will. If, in the highly unlikely case that it ever actually happens, they pass some course entirely by cheating and getting the AI to do the work, they will be completely fucked if they ever have to do a real programming job because they'll be quickly found out.

    1. Version 1.0 Silver badge
      Alert

      Re: The boffins say AI tools can help students in various ways

      Ever since AI appeared I have seen it as Artificial Idiots: sure, it can work, but there is no guarantee that it must work. I don't see AI as "bad", I just don't see it as error-free. Teaching students never to assume that everything works, unless they work to verify it, is educational.

      1. captain veg Silver badge

        Re: The boffins say AI tools can help students in various ways

        AI isn't bad, per se.

        Half the human population is of below average intelligence. Why shouldn't the artificial variety be the same?

        In fact, were it to reach that rarefied median level we should probably think it an achievement.

        AI seems to me to be an attempt to emulate the infinite number of monkeys, equipped with typewriters and given enough time, inevitably producing the works of Shakespeare. We haven't got an infinite number of monkeys available. Will cloud computing do?

        -A.

    2. Anonymous Coward
      Anonymous Coward

      Re: The boffins say AI tools can help students in various ways

      "All this worry about "kids cheating" is overblown in my opinion"

      I have conversations with numerous IT teachers weekly (I can yell at one right now if I want :-P) and they know it's very easy to spot "cheating". It's common that ~70% of the students have almost identically formatted syntax while the other ~30% all have different styling. Thus, ~70% of students are using GitHub Copilot or StackOverflow or whatever. One of these teachers, who works for a state college, said this isn't considered cheating anymore :-/. As the article alludes, it's apparently now about understanding how the final application works rather than what builds it. I don't agree with this at all, as it's tethered to how the college generates $$$ through funding rather than its educational merit. That corruption (IMO) aside, if you really do allow students to use Copilot then only the understanding of the final product is achievable, not the analytical journey.

      1. Michael Wojcik Silver badge

        Re: The boffins say AI tools can help students in various ways

        Pedagogy will have to adapt, just as grade-school mathematics classes had to adapt to successive generations of calculators.

        For example, we're going to need to shift from "write a program" assignments to "explain a program" ones. This is the programming equivalent of "show your work". Literate programming1 is one possible approach: students turn in a combination program and essay, with interwoven text and code, explaining what they've done and why.
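        A minimal sketch of what such a submission might look like – a hypothetical beginner assignment, with the "essay" compressed into a docstring and comments for brevity:

        """Assignment: report the k most frequent words in a text file.

        Why this shape: Counter makes the counting a single O(n) pass over the
        input, and heapq.nlargest selects the top k in O(n log k) rather than
        sorting the whole vocabulary, which would be O(v log v).
        """
        import heapq
        import re
        from collections import Counter

        def top_k_words(path, k=10):
            # Lower-case everything so "The" and "the" are counted together.
            with open(path, encoding="utf-8") as f:
                words = re.findall(r"[a-z']+", f.read().lower())
            counts = Counter(words)  # one O(n) pass over the input
            # nlargest keeps a k-sized heap instead of sorting everything.
            return heapq.nlargest(k, counts.items(), key=lambda kv: kv[1])

        The grading emphasis then shifts from "does it run?" to whether the prose actually justifies the choices the code makes.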

        "Flipped class" approaches – where students read or watch lecture material for homework, and work on assignments in class, individually or in groups – can also help reduce cheating, particularly if the instructor/student ratio is reasonable so instructors can spend time with each student on a regular basis.

        Frankly, pretty much every academic discipline is going to need to address this. When I was last in academia doing digital rhetoric, I presented some research on machine essay generation, and pointed out that soon traditional essay assignments would be completely useless. (They already more or less are, for students with the resources to make use of "paper mills" or benefiting from the collections of papers maintained by various student organizations.) Composition long ago largely switched to a show-your-work model with students turning in multiple drafts and revising them based on peer-group and instructor feedback, and will have to continue in that direction. So will every other course of study that involves most sorts of unsupervised intellectual labor.

        1Though using something less arcane and cumbersome than Knuth's WEB system. Love the guy, but he has a fondness for eye-bleeding syntax.

  6. JavaJester

    Potential to move learning up the stack

    Assuming these coding assistants mature and provide actionable advice without ethical quandaries*, they represent an opportunity to reimagine how programming is taught. The instruction could focus more on choosing which suggested solution has the best tradeoff for the problem at hand. Things like how to evaluate O(1) vs O(N) vs O(N²) complexity in suggested code would be appropriate to teach in beginner classes. Topics that used to be considered graduate level, or at least junior/senior level undergrad, could move down the stack to more basic and intermediate level instruction. I believe a focus in this direction would be far more profitable than trying to maintain the status quo by sniffing out "cheaters".

    * Yeah, I know huge assumption. This technology is too useful to be abandoned. I think it is a "when" question not an "if" question.

    1. doublelayer Silver badge

      Re: Potential to move learning up the stack

      "The instruction could focus more on choosing which suggested solution has the best tradeoff for the problem at hand. Things like how to evaluate O(1) vs O(N) vs O(N²) complexity in suggested code would be appropriate to teach in beginner classes. Topics that used to be considered graduate level or at least Junior / Senior level undergrad could move down the stack to more basic and intermediate level instruction."

      Maybe it was just me, but that was taught right after the basics of syntax and structure were covered. Understanding performance is critical and not too complicated, so it can be done early. The courses where we used more complex algorithms and mathematically proved things about their performance came later, but understanding the practical answers for straightforward algorithms justifiably came early.

      In order to understand the tradeoffs, you have to read and understand the code these tools spit out. Even ignoring the chance that the code you got from the tool is wrong, you still need that level of literacy to parse it. That comes more naturally to students who have written code for themselves and analyzed it, because they need to understand how efficient code is created to judge whether someone else's code is efficient or to improve it.
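      For instance – a made-up pair of "suggested" solutions, not output from any particular tool – this is the kind of comparison a student would be expected to read and judge:

      def has_common_element_quadratic(xs, ys):
          # Nested loops: O(len(xs) * len(ys)) comparisons in the worst case.
          for x in xs:
              for y in ys:
                  if x == y:
                      return True
          return False

      def has_common_element_linear(xs, ys):
          # Build a set once, O(len(xs)); each membership test is then O(1)
          # on average, so the whole thing is roughly O(len(xs) + len(ys)).
          seen = set(xs)
          return any(y in seen for y in ys)

      Spotting that the two behave identically but cost very different amounts is exactly the literacy being argued for here, and it presupposes having written and analyzed such code yourself.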

      1. captain veg Silver badge

        Re: Potential to move learning up the stack

        > you have to read and understand the code these tools spit out

        Thanks.

        That's all anyone needs to know.

        -A.

    2. Peter2

      Re: Potential to move learning up the stack

      This technology is too useful to be abandoned. I think it is a "when" question not an "if" question.

      If it's spitting out copyrighted code without an appropriate user license, leaving the end user liable, then it's a liability rather than an asset if the end user doesn't understand the legal implications.

      Imagine that it used code covered by, for instance, the GPL, and that output was then used in a commercial project which was released.

      What would be the outcome?

  7. doublelayer Silver badge

    Calculators

    The effect of these tools on computer science courses is similar to, though smaller than, the effect that calculators had on mathematics education. For early courses, their existence can basically be ignored. Just as a student could type the multiplication assignment into a calculator and get correct answers, these programs can spit out functioning answers to the common introductory problems. Any student who uses a calculator instead of learning the mechanisms of arithmetic is going to have some problems when the assignments get more complicated. They can even use calculators to solve basic algebraic problems, but when they're asked to create an algebraic problem from a situation and then solve it to get a useful piece of information, they won't understand how to do that. In later assignments, the calculator can speed up the process of getting correct answers. Similarly, we're not asking students to write simple introductory programs because there are so many jobs out there needing them, but so that they can learn the concepts necessary to write the more complex programs that AI codewriters can't manage at all. Once they know how to write software, they can feel free to use the AI code-writing tools (subject to the copyright and licensing restrictions) if they find it speeds them up.

    In universities, I suggest that we let students fail a bit more than we would otherwise. If a seven-year-old child uses a calculator on their homework, it's worth finding out and helping them learn the concept on their own because it will affect them later. If a student who is much older decides to cheat on the introductory computer science assignments by having a tool create correct answers for them, I will have no sympathy when they get faced with a more complex problem requiring analytical skills, skills that the other students who actually did the homework will have gained and they will lack. This is not as good an approach for computer science taught to younger students though.

  8. Anonymous Coward
    Anonymous Coward

    Learning while earning

    To be honest I would prefer an autopilot that included attribution(s) for the purpose of deeper investigation, and also alternatives. In other words, an improved search mechanism.

    For any moderately complex algorithm, you'd want to know the average order complexity, best/worst case analysis, etc.

    If that kind of information is at your fingertips, and you're reading it, you'll be learning too.

    1. Ken Hagan Gold badge

      Re: Learning while earning

      Ah, but tools like CoPilot literally don't know where they got the code from and no-one really knows how they work. In that sense, I suppose they are like an experienced programmer.

  9. Anonymous Coward
    Anonymous Coward

    Why use AI?

    Getting more useful compiler error messages doesn't need AI. Most of them have typical causes you pick up fairly quickly, and it wouldn't be that hard for compiler writers to add a 'most likely cause' line to the error message. I'd particularly appreciate that with the 400 lines of 'something, something template, misdirect, misdirect' you get for any template error.
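    As a rough external approximation of that idea – a wrapper script rather than a change to the compiler itself, and the patterns below are illustrative guesses at common g++ wording, not a definitive catalogue:

    import re
    import subprocess
    import sys

    # Map diagnostic patterns to a one-line "most likely cause" hint.
    LIKELY_CAUSES = [
        (re.compile(r"no matching function for call"),
         "most likely cause: argument types don't match any overload or template"),
        (re.compile(r"expected ';'"),
         "most likely cause: missing semicolon at the end of the previous line"),
        (re.compile(r"was not declared in this scope"),
         "most likely cause: missing #include or a typo in a name"),
    ]

    def main():
        # Run the real compiler and pass its output through unchanged...
        proc = subprocess.run(["g++"] + sys.argv[1:], capture_output=True, text=True)
        sys.stderr.write(proc.stderr)
        # ...then append a short hint when a known pattern appears.
        for pattern, hint in LIKELY_CAUSES:
            if pattern.search(proc.stderr):
                sys.stderr.write(f"*** {hint}\n")
                break
        sys.exit(proc.returncode)

    if __name__ == "__main__":
        main()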
