Perhaps AI is going to take away coding jobs – of those who trust this tech too much

Computer scientists have evaluated how large language models (LLMs) answer Java coding questions from the Q&A site StackOverflow and, like others before them, have found the results wanting. In a preprint paper titled, "A Study on Robustness and Reliability of Large Language Model Code Generation," doctoral students Li Zhong …

  1. Ace2 Silver badge

    Water is wet, film at 11

  2. druck Silver badge

    There's got to be a threshold above which it's easier to code something yourself than to correct all the mistakes in the AI-generated code, and that threshold is way below the error rates these LLMs are producing.

    That's not even taking into consideration whether, if by some chance it does generate seemingly working code for the immediate problem, you can understand it well enough to maintain it in the future.

    1. Joe W Silver badge

      Low bar....

      Heck, there's code I wrote that I find hard to understand myself a fortnight later.

      Yes, I eventually did refactor the "gem" I'm thinking of. It still sucks wrt readability. But it is much faster than all the other versions that were more readable. No, I don't like this.

      1. Michael Wojcik Silver badge

        Re: Low bar....

        Yes. Source code has two audiences: machines and humans. And the latter includes Future Me.

        Readability is one of the most important factors in code quality for any software that's likely to be maintained, and that includes most non-trivial software.

    2. Anonymous Coward
      Anonymous Coward

      You are aware that ChatGPT can document the code it's written?

      It can also document yours, no matter how long ago you wrote it.

  3. Sceptic Tank Silver badge
    Terminator

    Remember to push your iffy generated code to GitHub so that the bots can get it from there to generate some code for another dev. Repeat.

    Programming jobs are going to migrate to writing unit tests to check whether the output from the Llama (assuming output came out of a somewhat-respectable orifice of said Llama) is actually sometimes workable. So brush up on your testing skills; that is where the big money will soon be.

    1. ChoHag Silver badge

      A developer's main role in the future will be persuading your manglers that you really ought to at least tweak the AI's code, or maybe just look at it, before it goes to production, and that when production goes down anyway it was not because you mistrusted the machine.

    2. find users who cut cat tail

      No, the big money will be in maintaining old COBOL (and even more obscure) code our society is running on.

      Also, unit tests, despite their utility, do not check code correctness. If your unit test does not go beyond 7, it can easily verify that all odd numbers are prime…
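
      A minimal plain-Java sketch of that failure mode (class and values invented for illustration, not taken from the article):

          public class OddPrimeTest {

              // A correct primality check, used here only to show what a
              // too-narrow unit test can (and cannot) catch.
              static boolean isPrime(int n) {
                  if (n < 2) return false;
                  for (int i = 2; i * i <= n; i++) {
                      if (n % i == 0) return false;
                  }
                  return true;
              }

              public static void main(String[] args) {
                  // "Unit test" for the claim that every odd number is prime,
                  // but it never looks beyond 7 -- so it passes.
                  for (int n = 3; n <= 7; n += 2) {
                      if (!isPrime(n)) {
                          throw new AssertionError(n + " is odd but not prime");
                      }
                  }
                  System.out.println("All odd numbers checked are prime. Ship it!");

                  // One value past the tested range and the claim collapses.
                  System.out.println("isPrime(9) = " + isPrime(9)); // prints false
              }
          }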

      1. Brewster's Angle Grinder Silver badge

        Isn't that the reason for AI? You train your AI on Cobol and have your scarce* and expensive programmers deal with the fallout.

        * We had this discussion the other day. Cobol was designed as an easy language. There's no reason experienced programmers of any modern language couldn't get up and running fairly quickly. It would not be like the major conceptual challenges of asking a Cobol programmer to manage a codebase in C.

    3. Brewster's Angle Grinder Silver badge

      I can see the job increasingly becoming reviewing AI-written code. At the moment, it might be quicker to write it yourself. It's not going to stay that way forever.

      Eventually, they'll get good enough that the job becomes writing high-level specs and little more than diving into the bugs the AI can't solve. We learnt to trust compilers to generate the machine code instead of doing it ourselves. Eventually, we'll trust them to generate the nuts-and-bolts code, too.

  4. Alan Bourke
    Flame

    Ah yes, code 'written' by not-actually-AI.

    Full tilt to a future of code grey goo where language models hoover up scads of iffy code written by humans and puke it back out as a sequence of alphanumerics that statistically *looks* like an answer to a coding question asked by another human.

    It's silly on an "autonomous cars are coming next week" level.

  5. thondwe

    Testing

    To anyone using an AI tool to generate code: I've got an expensive AI-based testing tool to sell you! (sic!) - might be useful??

  6. Zippy´s Sausage Factory

    Ah yes, people are finally waking up to the fact that AI is just like blockchain and the Metaverse - nothing but hype and terrible products.

    Anyone want to guess when "no AI will be used in the fulfilment of this contract" starts to become standard?

  7. BigAndos

    Useful but heavy pinch of salt required

    I’ve found ChatGPT is a useful tool for answering very specific questions. For example, I’m rubbish at regex and it definitely helped me learn a few things. Having said that, if I ask it more complex questions it often gives answers that just don’t work (roughly 30% of the time), and you need to have a very clear idea of what you want your code to do and how you want it to be structured before you get sensible answers to your questions.

    I’ve tried GitHub Copilot too - about half of the time its suggestions are really useful, and half the time absolutely no use at all.

    1. Michael Wojcik Silver badge

      Re: Useful but heavy pinch of salt required

      Certainly it's a good way to avoid learning anything, improving your own skills, or serendipitously encountering something else of interest.

      Delegating your work to a machine has essentially the same advantages and disadvantages as delegating it to a stranger of unknown capabilities and motives.

  8. ChrisC Silver badge

    "In zero-shot settings, Llama has the lowest API misuse rate. However, this is partially due to [the fact that] most of the Llama answers do not include any code."

    Yes, I also find that not writing any code in the first place has a beneficial effect on the number of errors I make...

  9. deceptionatd
    Coat

    Did anyone try having the Llama write some Perl?

  10. Diogenes

    Not AI but SI?

    Where SI stands for Simulated Intelligence

  11. msalot

    AI not ready for prime-time coding jobs

    AI might be able to do this, but it will take a ton of accurate specifications of the desired code to produce an accurate and complete result. I have used AI for a few relatively simple coding exercises and it has improved my code, but producing a complete solution is still far out of reach.
