AI co-programmers perhaps won't spawn as many bugs as feared

Machine-learning models that power next-gen code-completion tools like GitHub Copilot can help software developers write more functional code without making it less secure. That's the tentative finding of an admittedly small 58-person study conducted by a group of New York University computer scientists. In a paper distributed …

  1. Stuart Castle Silver badge

    Maybe Microsoft won't break the printing system in Windows quite so often with AI programming?

  2. Mike 137 Silver badge

    Possible issue?

    I wonder how well this will support code optimisation (supposing anyone bothers any more), as it's quite possible to generate fully functional but non-optimal code that's nevertheless hard or impossible to optimise without a major rewrite.

    1. Anonymous Coward
      Anonymous Coward

      Re: Possible issue?

      This is the real problem nobody is addressing. There's a lot you can say about this code-wise, but I simply remind people that while optimization may seem to have diminishing returns, non-optimized code has compounding costs: a 1ms delay repeated across n calls in a hot path can and will literally stop things.

      A small code example mixed with a human preference can be found in JavaScript. You see a lot (I mean A LOT) of people using "d.querySelector('#' + id)" instead of the 10x faster "d.getElementById(id)". If a human doesn't really know better, what chance does AI have? Worse, what if the person who only ever uses querySelector() is the same person who wrote the AI?!
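      To put a number on the compounding claim, here's a small sketch (illustrative only; the counters and names are invented, and it is not a real DOM benchmark): looking up n keys by linear scan touches roughly n²/2 entries in total, while an indexed lookup touches n.

```javascript
// Sketch of how a small per-operation cost compounds (illustrative only,
// not a real DOM benchmark): find each of n keys in a list of n entries.
const n = 1000;
const entries = Array.from({ length: n }, (_, i) => `id${i}`);

// Linear scan: lookup i touches i+1 entries, so n lookups touch n(n+1)/2.
let scanTouches = 0;
function linearFind(list, key) {
  for (const item of list) {
    scanTouches++;
    if (item === key) return item;
  }
  return null;
}

// Indexed lookup: one probe per lookup, so n lookups touch n entries.
const index = new Set(entries);
let probeTouches = 0;
function indexedFind(key) {
  probeTouches++;
  return index.has(key) ? key : null;
}

for (const key of entries) {
  linearFind(entries, key);
  indexedFind(key);
}

console.log(scanTouches); // 500500 = n(n+1)/2
console.log(probeTouches); // 1000 = n
```

      Same constant cost per touch, wildly different totals. "Exponential" overstates it, but quadratic blow-up from an innocent-looking inner loop is exactly the kind of thing that grinds production systems to a halt.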

      1. Anonymous Coward
        Anonymous Coward

        Re: Possible issue?

        The obvious question: if there is an equivalent function that is 10x faster, why isn't the interpreter or compiler applying that optimisation?

        1. MatthewSt

          Re: Possible issue?

          Because in this context it doesn't, and can't, know that that's what you want it to do. The two calls aren't equivalent: the 10x faster one only ever looks in one place for the value, which is exactly why it's faster, and the engine can't prove in advance that the general call would only ever need to look there.
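          A toy model of that difference (the node structure and function names here are invented for illustration, not real browser internals): an id lookup can hit the document's id map directly, while a general selector match has to support arbitrary selectors, so its general strategy is a tree walk.

```javascript
// Toy model of the two lookup strategies (invented structure,
// not actual browser internals).
const idMap = new Map(); // documents really do keep an id -> element map

function makeNode(id, children = []) {
  const node = { id, children };
  if (id) idMap.set(id, node);
  return node;
}

const root = makeNode(null, [
  makeNode("header"),
  makeNode(null, [makeNode("content", [makeNode("footer")])]),
]);

// getElementById-style: a single hash lookup, because it answers only
// one narrow question: "which element has this id?"
function byId(id) {
  return idMap.get(id) ?? null;
}

// querySelector-style: must support arbitrary predicates (".class",
// "div > span", ...), so the general strategy is a depth-first walk.
function bySelector(node, match) {
  if (match(node)) return node;
  for (const child of node.children) {
    const found = bySelector(child, match);
    if (found) return found;
  }
  return null;
}

console.log(byId("footer") === bySelector(root, (el) => el.id === "footer")); // true
```

          Real engines may well fast-path simple "#id" selectors, but the general machinery (selector parsing, tree matching) is why the dedicated call can be faster, and why a compiler can't silently rewrite one into the other.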

  3. Richard 12 Silver badge

    Seems a very narrow study

    A C task, re-implementing something that is part of every OOP language's standard library: it seems like exactly the thing the AI should be absolutely perfect at.

    So the comparison is between typo rates when copy-pasting.

    Surely this is more of an argument for using well-tested libraries and higher-level languages?

    1. Michael Wojcik Silver badge

      Re: Seems a very narrow study

      Yes, it's a narrow study of a somewhat contrived problem. I'm not claiming no one ever needs to write code to maintain and manipulate linked lists in C, but it's not something most developers should be doing.

      This tends to be true of most studies of programming practices and performance. Dan Luu's blog piece on studies of different outcomes with various programming languages is a decent first analysis of this problem.

      Broadly speaking, the industry does not do a good job of studying programming.

  4. Howard Sway Silver badge

    Human written code contains as many bugs as human written code

    Seeing as these "AI programmers" just digest large amounts of human-written code, it's a mathematical certainty that they will, on average, produce code containing as many bugs as the code they digested, since all that produced code is itself an average sample of buggy code.

    If it's to be anything more than this, the AI code assistants need to be trained in the process of debugging: learning to identify common classes of errors, learning how to fix them, and doing this to all code before feeding it into the main code-producing engine. Then it might be Garbage In, Less Garbage Out. But debugging is often hard and non-linear, and needs real intelligence to work out why things happen, so this is going to be incredibly difficult to simulate algorithmically, which is why I suspect they will try to minimise this downside when hyping up the code-churning "co-programmers".

    1. Anonymous Coward
      Anonymous Coward

      Blind Alley, that one.

      People will need to build a better set of training data to improve the output. There is only so far you can get with scraped code, a lesson that GitHub in particular will learn over and over now that they are already on the plateau of diminishing returns. They don't need more data, they need specific and pristine data.

      Humans can make this, but don't expect them to do it for free, and you won't get more than what you pay for. Fiverr coders make Fiverr code.

      Training an ML model to do non-trivial debugging is one of those tilting-at-windmills problems. Code optimization is hard, but at least in some cases it has deterministic, or at least measurable and comparable, results. That gives you something to feed your weighting system.

      Debugging is a totally different problem class. If "AI" were real instead of a marketing term maybe. That's not what we have. A ML model might be able to auto-gen boilerplate for your test framework, but it's not able to think, an essential skill for figuring out test cases. Claiming you cracked that one without a working implementation is about as credible as claiming you cracked the halting problem without a proof.

      And we figured out how to autogen boilerplate for unit tests and things like fuzzers without even using ML methodologies, so it's even less impressive if that's all it's good for.

    2. captain veg Silver badge

      Re: Human written code contains as many bugs as human written code

      > the AI code assistants need to be trained in the process of debugging

      Something like this?

      "It looks like you are writing a linked list.

      "Would you like help?

      " - Let me bugger it all up for you

      " - Just fuck off and leave you alone"


  5. John Smith 19 Gold badge

    In *theory* this improves the mediocre programmers at little cost.

    But (and I'm just throwing this idea out there) WTF don't we figure out better ways to train programmers in the first place?

    I love the idea of creating bug-free software automatically. *

    But something tells me this is not it.

    *The ultimate test for any of this AI stuff is: can you use the tool to maintain itself? Because an awful lot of this make-the-programmer obsolete/more productive/more accurate stuff never tries to do this.

    Not exactly a mark of approval for how much its developers trust their creation.

  6. Locomotion69

    But in the long term

    Code tends to live a long, long time. And gets modified. Sometimes over and over again.

    Every now and then a human may touch the AI-generated code, and maybe even understand why it is there, what it is all about and what is to be achieved by it. But probably not.

    I am afraid that such code gets partly rewritten, and then AI-ed again, up to the point that neither a human nor the AI can make any sense of it anymore. This is when the software is no longer maintainable by any standard, yet no budget exists for a proper rewrite.

    I see a value in these tools though - I just would not call them AI, as I do not consider these tools "intelligent".

    But then, when writing this post I realize the above phrase may just be as applicable to humans....

    1. Anonymous Coward
      Anonymous Coward

      Hire a better Software Architect

      If the ML-generated code sits inside large, human-designed and human-planned functions, methods, objects, etc., that is a manageable problem.

      I suspect if you let the autogen spew out the top level backbone of your project, then hack away at it, and send it back into the code mutilator, you are right. Fun will ensue after-hours as programming moves into the cots in the back of the QA lab and starts ordering pizza with a double helping of "Colombian Basil".

  7. Anonymous Coward


    Since I retired I use virtual keyboards to write words, not code. Their AI checks my spelling and suggests the next word or phrase I might want to use.

    I see this as no different from what is being discussed. It will catch some errors and miss others; it will make some good suggestions and some bad ones. But in the end it won't make the programmer program better, just faster.

    Would I trust an AI to write a whole program (or a whole book)? No. But I've known professional programmers that I wouldn't trust either.

    AI isn't ready to take your jobs ... yet. And it may never be. But it can, when used appropriately, make your life easier.

  8. amanfromMars 1 Silver badge

    The Chosen Few in the Matrix AIReImagineered for Virtual Deployment in Remote Access Trojans ‽ .

    That words create, command and control and destroy worlds is both an undeniable bug and an infinitely vast field of endeavour and experimentation and enjoyment also providing both opportunities and virulent 0day exploit vulnerabilities with extremely convenient universal supply lines endemic in SCADASystems .... Supervisory Control And Data Acquisition Systems ..... and able to be a catastrophically destructive systemic weakness well known and easily silently and stealthily launched against both hostile enemies and self-serving frenemies alike, whether domestic home-grown or foreign and alien, by that and/or those in the know.

    And thus something to fully expect be enthusiastically and exhaustively exercised to the nth degree at all expert levels of engagement.

    However, whereas forewarned is forearmed is usually trotted out as a sort of comfort blanket whenever considering the possibility of attacks in the pursuit of defence against novel and formerly unheard of threats or events, there is nothing usual about rapidly emerging future events nowadays which would allow one to bear effective opposing arms against them.

    To imagine otherwise is hubris confirmed and madness invited out to play in the most destructive fields of Virtual Command and Remote Control, Conflict and CHAOS [Clouds Hosting Advanced Operating Systems].

    1. amanfromMars 1 Silver badge

      Problematic Fake News .... from Whom and/or What and for Whom and/or What?

      And whenever you consider/ponder and wonder on the news being fed to you today in the world that you are living in, is there an undeniably uncomfortable similarity that suggests the truth of the above parallel ....[The Chosen Few in the Matrix AIReImagineered for Virtual Deployment in Remote Access Trojans ‽ .] .... and its likelihood to be that which takes full advantage of your profound ignorance of it with main stream media channels and out of touch and corrupt governments being complicit in mentoring and monitoring and furthering your ignorance in service of servering a now rapidly failing and increasingly self-destructive inducing exclusive executive agenda?

      Be honest now with yourself, give it some thought and try not to tell yourself things are quite different whenever in reality all too familiar and fast breeder reaction grounds for popular revolution and domestic insurrection, for that is what they exactly are, whether you like them or not ..... and that’s where current events are swiftly taking you.

      Q: Ye Olde Worlde Cavaliers vs Roundheads Revisited for a Postmodern Day 0Day Rematch with IT and AI in Alien Command with Remote Virtual Master Control ?

      A: Yes, I suppose it certainly is.

  9. Il'Geller

    Within 3-5 years programming will completely disappear, since computers can understand human speech, translate texts into a structured format, and then independently assign the necessary functions.
