By 2030, software developers will be using AI to cut their workload 'in half'

Half a decade hence, software development will be transformed by AI assistance, argue four academics from the University of Lugano in Switzerland. In a preprint paper titled, "From Today’s Code to Tomorrow’s Symphony: The AI Transformation of Developers' Routine by 2030," researchers Matteo Ciniselli, Niccolò Puccinelli, Ketai …

  1. Anonymous Coward
    Anonymous Coward

    So an advert written by AI posing as an article.

    1. Roland6 Silver badge

      The clue is the word “envisioned”.

      From looking at some of the authors' other works, this seems to be the positioning pitch for some more run-of-the-mill research funding.

      1. Michael Wojcik Silver badge

        I'm usually a defender of academic research, which often gets short shrift in this industry, but yes, this particular paper looks like a bunch of speculative bullshit. And not even interesting speculative bullshit. It also sounds like something written by people with no experience of real software development.

        And any "hyperassistant" that tells me to take a break is getting a swift kick in the exception handler. I run my work process. Not a manager, not a nanny state, and most definitely not some half-assed autoregressive ANN stack.

      2. Groo The Wanderer Silver badge

        One can "envision" pretty much anything they'd like. Personally I like to envision being seduced by gorgeous women, but as that is no more likely to actually happen than the subject material of this article is (or indeed, 90% of the promises made by AI vendors, if not more), I don't spend too much time thinking about it.

    2. Tron Silver badge

      That sort of prognostication rings a bell.

      Weren't we all supposed to be whizzing about with jet packs on by the 1990s, our homes powered by mini nuclear reactors in the shed?

      And what did happen to the metaverse?

  2. b0llchit Silver badge

    Prepare for the HypeAssistant. The HypeManager has been using the HypeAssistant to increase the click rate of the HypeAssistant to improve click metrics and make more money for the HypeManager.

  3. sarusa Silver badge
    Devil

    Too imprecise

    'Software developers' is far too sloppy and imprecise. It's like 'gamers', which encompasses people just happily doing hidden object games, rabid animals doing PvP MOBAs, people running around solving problems in The Witness or Baba is You, people RAIDing in MMOs, and people building CPUs in redstone.

    On the 'software development' side, I'm sure if you're a code pig (a giant corporation programmer stuck in a cubicle/pigpen mindlessly pounding on a very limited task) LLMs will help a lot, because you're not doing much thinking to start with. Might as well just steal the code of everyone else who's done this before, which the LLM has already eaten, digested, and shat out. Lots of room for time saving here.

    If you're an actual engineer, LLMs can't help with any of the actual engineering jobs, because those are tradeoffs between the requirements, the desirables, and the consequences and resource costs of each approach. An LLM has no f@#$ing idea at all about any of that. It will happily give you O(N^3) code which ignores the requirements, because hey, it compiles. Though based on my recent playing around with Llama code helpers, even 'it compiles' or 'it does the right thing' isn't guaranteed.

    Basically, no LLM is 'thinking' at all. It is stochastically regurgitating all the things it has seen before. So the more your job involves actual thinking, and the less it involves going on StackExchange, copypasta-ing code snippets, and randomly smacking them till they compile, the less threatened you are. And the less LLMs can help you. Like I said, I've been playing with this, and the best it can do for me is a line or two of auto-completion (and it's wrong at least half the time).

    1. GlenP Silver badge

      Re: Too imprecise

      Exactly - the program I'm currently working on not only requires good knowledge of the ERP system we use but also of the way we use the various transactions and tools.

      The last time I tried one of the LLM tools it automatically assumed I was developing a mobile app - I wasn't!

      Thank goodness I aim to be out of the industry in 2031 at the latest (depending on pensions and lottery wins! :-) )

    2. zimzam

      Re: Too imprecise

      Precisely. No matter how big the dataset is, an LLM still doesn't actually understand the context of what it's working on. It will definitely be used, but most likely as a glorified spell-check to find missing semi-colons or maybe for quick and dirty commenting.

      It would be nice to see some studies that look at how much time developers even could save letting an AI write their code vs. how much time they'd have to spend trying to comprehend it, then find and correct errors, race conditions, etc.

      1. Snake Silver badge

        Re: Too imprecise

        Maybe, but I'm not 100% convinced of that. Contrary to the chest thumping of many, quite a bit of the actual code is fairly repetitive, reusing things like entry boxes with verification, a calculation but with different variables, etc. If an LLM can be taught to seek out those repetitions and make a solid recommendation with modified context, I think programmers might find that very useful - an autocomplete that actually works as expected. We'll have to see if they can accomplish this, however.

        1. zimzam

          Re: Too imprecise

          But it's not like the LLM can seek out repetitive problems to solve, it still has to be told to do them. So I don't really see how it's more efficient than using auto-completion features, especially since you don't have to double check an auto-complete's work.

          1. Snake Silver badge

            Re: Too imprecise

            I'm thinking that, if LLM "AI" does indeed become integrated into the desktop itself, it'll have a model of *your* own work, on your workstation, to use for autocomplete.

            Many coders have multiple windows open whilst they work, usually linked modules. :type type type: "Dave, I see you are going to call module X. Would you like me to use the last calls as a template and complete with the current variables?"

            Or, how about

            "Dave, I believe you are creating a subroutine that you used previously, saved in Project C.nx. Would you like me to call up that subroutine for your examination? I can modify it with the current module variables."

            I believe everyone is thinking too much within the box of what is happening now, not what may be possible 5 years out, as they predict.

            1. Doctor Syntax Silver badge

              Re: Too imprecise

              ""Dave, I believe you are creating a subroutine that you used previously, saved in Project C.nx."

              In the real world client C paid for that subroutine so they own the copyright.

            2. Roland6 Silver badge

              Re: Too imprecise

              > I'm thinking that, if LLM "AI" does indeed become integrated into the desktop itself, that it'll have a model of *your* own work, on your workstation, to use for autocomplete.

              The programmer will be relying on the AI to do the autocomplete, so there will be no “own work”…

              > Many coders have multiple windows open whilst they work, usually linked modules. :type type type: "Dave, I see you are going to call module X. Would you like me to use the last calls as a template and complete with the current variables?"

              Only going to work in simple cases. In the past, I’ve known which specific instance I wish to use as a template and the variables I wish to use. An AI means I now need to double check what has been entered to ensure it is what I intended, i.e. more work…

              > "Dave, I believe you are creating a subroutine that you used previously, saved in Project C.nx. Would you like me to call up that subroutine for your examination? I can modify it with the current module variables."

              Way back this seemed like a good idea; the only problem was different languages and/or system platforms, so reuse was more of an intelligent rewrite. I don’t see that level of AI coming into existence anytime soon…

              > I believe everyone is thinking too much within the box of what is happening now, not what may may possible the 5 years out they predict.

              I’ve been around long enough to know things aren’t moving as fast as some want to believe. The “end of programming” has been heralded at least as long as I’ve been in the industry.

              Also let’s not forget the whole software pattern movement which this AI will depend upon…

              1. Snake Silver badge

                Re: no "own work"

                Well, since everyone here is a doomsayer and it's the End of the World as We Know It, what am I to say? AI will destroy programming, computers will fail worldwide, dinosaurs will come back and roam the Earth, and humanity will fall under the weight of used Starbucks latte cups.

                Oh, woe is us.

                1. Jimmy2Cows Silver badge

                  Re: no "own work"

                  Why does it have to be one of two extremes? There's usually a middle ground.

                  Most likely scenario is, once the hype bubble bursts, some AI tools will show value in some use cases, others won't. So these will be useful some of the time, useless some of the time, and a bit meh for the rest. Most devs will find what works for them, and discard the rest. It's not an all-or-nothing proposition.

                  Pie-in-the-sky papers pitching the possibility of uber-assistants doing the thinking for you... in the end that's just part of the hype game. These things are always handwavey magical unicorns. Unless an LLM develops the ability to actually comprehend and think for itself, all these ideas are just noise. Researchers fire out papers about the current big thing. That's kinda what they do. It doesn't have to be realistic. A lot of the time it's either thought provocation, or fishing for the next funding round.

                  It's that comprehension and ability to think, to reason, to rationalise, to make inferences beyond the garbage it's been trained on (and it is mostly garbage), which is the only thing that can actually make these flights of fancy a reality. And given how these things are trained, how they operate, that's never going to happen.

                  1. Roland6 Silver badge

                    Re: no "own work"

                    >” Most likely scenario is, once the hype bubble bursts, some AI tools will show value in some use cases, others won't.”

                    A good example is C compiler errors (yes, I know no one uses C these days, but stay with me): you know the generic error you get when you’ve omitted a semicolon and now have to search through your code to locate it… An AI-enabled parser might be able to assist that search, or even be part of the pre-compilation checks.

                    Obviously, the AI needs to be able to be a positive contributor straight out of the box; just like current compiler toolsets.

                    1. jake Silver badge

                      Re: no "own work"

                      "yes I know no one uses C these days"

                      I'll bet you a meeeellion dollars that you have several bits of kit about your space that were, at the core, programmed in C. Not C++, nor any of the other bastardizations, but plain old C.

                      And their replacements will be programmed in C. And so unto the nth generation.

                      1. Roland6 Silver badge
                        Happy

                        Re: no "own work"

                        Just trying to head off a long thread about Rust, Python etc. which probably suffer from similar problems ie. I, the writer, have difficulty at times spotting the errors in my own code.

                        > I'll bet you a meeeellion dollars that you have several bits of kit about your space that were, at the core, programmed in C

                        I’ll bet a meeeellion dollars (in my chosen currency) that many AIs are programmed in C…

                        Which suggests a good use for a programmers AI assistant might be to assist in the development of AI code…

              2. Mike 137 Silver badge

                Re: Too imprecise

                "An AI means I now need to double check what has been entered to ensure it is what I intended"

                What is this "double check" you speak of?1

                1: I've been asleep on Magrathea for quite some time


            3. Michael Wojcik Silver badge

              Re: Too imprecise

              Perhaps that's relevant to the work many developers do. It certainly isn't to mine.

      2. DJO Silver badge

        Re: Too imprecise

        how much time developers even could save

        Precisely. Anybody who uses generated code without checking it first shouldn't be allowed anywhere near a compiler.

        I don't really see the need; small snippets which could be generated can be typed in a few seconds anyway, so the time & effort saving is pretty minimal. Anything more complex than a couple of simple lines will need to be thoroughly checked and fully understood before making it into production code.

        If a lot of generated code is used and it works adequately, what happens a few years down the line when the code is up for updating and the project is full of generated code (in conflicting styles, because it learnt from multiple sources) that nobody understands and even the "AI" has long left behind? Changing the code could be a nightmare game of unintended consequences.

        1. Richard 12 Silver badge

          Re: Too imprecise

          There is a fair amount of "boilerplate" in most languages - getter/setters and the like.

          However, creating boilerplate from the template is a long solved problem. LLMs are also really bad at it!
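          For what it's worth, even the plain C preprocessor can stamp boilerplate out of a template - a toy sketch with made-up property names, just to illustrate that no LLM is required for this:

          #include <stdio.h>

          /* Toy macro that stamps out a backing field plus getter/setter,
             purely to show template-driven boilerplate is a solved problem. */
          #define DEFINE_PROPERTY(type, name) \
              static type name##_value;       \
              static type get_##name(void) { return name##_value; } \
              static void set_##name(type v) { name##_value = v; }

          DEFINE_PROPERTY(int, width)
          DEFINE_PROPERTY(int, height)

          int main(void) {
              set_width(640);
              set_height(480);
              printf("%d x %d\n", get_width(), get_height());
              return 0;
          }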

        2. This post has been deleted by its author

  4. abend0c4 Silver badge

    The developer ... arrives in the office

    How very 2010.

    1. Jimmy2Cows Silver badge

      Re: The developer ... arrives in the office

      Almost as if the whole pitch is aimed at management and execs who are desperate to force their devs back into cubicle hell, where they (misguidedly believe they) can bleed as much from them as possible.

  5. Will Godfrey Silver badge
    Coat

    Predictions

    I've been seeing these pie-in-the-sky predictions for mmfty-mmmf years. Not one of them has come anywhere near reality.

    {coat 'cos when someone spouts this sort of thing, I leave}

    1. GlenP Silver badge

      Re: Predictions

      I seem to recall conversations 40+ years ago around compilers that would auto-correct syntax errors. The general conclusion was that, whilst they sounded like a good idea, they'd generally turn one error message into several.

      1. thosrtanner

        Re: Predictions

        Pretty sure they'd convert syntax errors into runtime errors...

      2. Michael Wojcik Silver badge

        Re: Predictions

        Many compilers today do include suggestions in diagnostics — e.g. when there's a reference to an undeclared symbol, suggesting a similar one was intended. That sort of thing is easy to do (a quick scan through the symbol table using Minimum Edit Distance) and requires minimal resources.

        Using an LLM to do that sort of thing is incredibly wasteful.
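        To illustrate the kind of symbol-table scan I mean (a rough, self-contained sketch; the symbol list and identifier names are invented for the example, not taken from any real compiler):

        #include <stdio.h>
        #include <string.h>

        /* Levenshtein edit distance between two identifiers. */
        static int edit_distance(const char *a, const char *b) {
            int la = (int)strlen(a), lb = (int)strlen(b);
            int d[64][64];                       /* assumes identifiers shorter than 64 chars */
            for (int i = 0; i <= la; i++) d[i][0] = i;
            for (int j = 0; j <= lb; j++) d[0][j] = j;
            for (int i = 1; i <= la; i++)
                for (int j = 1; j <= lb; j++) {
                    int cost = (a[i-1] == b[j-1]) ? 0 : 1;
                    int del = d[i-1][j] + 1, ins = d[i][j-1] + 1, sub = d[i-1][j-1] + cost;
                    d[i][j] = del < ins ? del : ins;
                    if (sub < d[i][j]) d[i][j] = sub;
                }
            return d[la][lb];
        }

        int main(void) {
            /* Pretend symbol table and a typo'd, undeclared identifier. */
            const char *symbols[] = { "counter", "count", "context", "content" };
            const char *unknown = "countr";
            int best = 0;
            for (int i = 1; i < 4; i++)
                if (edit_distance(unknown, symbols[i]) < edit_distance(unknown, symbols[best]))
                    best = i;
            printf("'%s' is undeclared; did you mean '%s'?\n", unknown, symbols[best]);
            return 0;
        }

        A quick scan like that costs microseconds, which is rather the point.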

        1. Jimmy2Cows Silver badge

          Re: Predictions

          Exactly. The modern dev environment has a ton of tools and extensions to automate or semi-automate a hell of a lot of common tasks. Where LLMs might help is a generalised natural language solution, rather than having to know a particular tool exists, install it, configure it, tweak it, learn its foibles. There's always a learning curve, but most devs just pick that up as they learn anyway. AI tools have their own learning curve. It's just a different one.

          It does seem like the paper's authors are pretty much oblivious to this wider dev world. No surprise.

        2. rw.aldum

          Re: Predictions

          > Using an LLM to do that sort of thing is incredibly wasteful.

          Excuse you, it’s incredibly useful - for driving the next hardware refresh cycle: hardware you now need just to run the same tool you had yesterday, with a feature that makes it 2x more expensive and your hardware 4x less efficient… how else will we sell people chips they don’t want? /s

    2. Julz

      From

      The ICL Technical Journal, 1987:

      "The present 5G programme has been designed to respond to a number of pressures, some recent and some visible within the industry for many years. The main drivers have been:

      The high cost and unpredictability of software development and maintenance, coupled with the shortage and mobility of trained programming staff. This has been a constant problem within the industry for two decades in spite of significant advances in software engineering methodology and more recently with the widespread adoption of fourth generation systems. It used to be called the “software crisis”, but the word “crisis” seems inappropriate for a phenomenon of such longevity. “Limit to growth” is a more durable phrase which better captures the effect of this problem."

      This was the justification for a number of workbench (old name for IDEs) style 5G products within ICL to improve programmer productivity most of which tried to autocomplete and suggest boilerplate style code. It would seem nothing much has changed...

      1. RosslynDad
        Facepalm

        Re: From

        So glad ICL had this programmer support before they started the Horizon system for the Post Office.

        1. Julz

          Re: From

          Different bit of ICL. That the Horizon software had issues is no great surprise. The shit show was the coverup and the rabid prosecutions.

      2. An_Old_Dog Silver badge
        Megaphone

        How Things Have Changed

        The current "silver bullet" vs. software development time-and-costs -- GAN/ML -- seemingly requires, or "works best with" much additional, expensive hardware.

        (Icon for marketing hype)

    3. StewartWhite
      FAIL

      Re: Predictions

      I've got a date for mmfty-mmmf years. As of today it's 43 years (three years before I started work as a programmer), according to the drivel that was spouted about "The Last One" from DJ "AI" Systems being "... the last human-produced program that needs to be written".

      See https://books.google.co.uk/books?id=3j4EAAAAMBAJ&dq=D.J.+%22AI%22+Systems&pg=PA7&redir_esc=y#v=onepage&q=D.J.%20%22AI%22%20Systems&f=false for more info (along with a bizarre ad explaining why dBase II isn't a bilge pump).

  6. Anonymous Cowpilot

    The academic view of software engineering

    While the claims are not as outlandish as those of McKinsey and company, the paper is pretty bad. The authors seem to share the view that is common among academics, that software engineers work on code from the start to the end of their day. Recent studies show software engineers spend between 20% and 50% of their time on code-related tasks. Any tool that claims more than a 20% productivity boost needs to be looking outside "programming" to the wider engineering aspects.

    If managers want a 50% improvement in software engineer productivity, a good start would be getting rid of management meetings, HR training courses, excessive time management processes and random "leadership initiatives".

    1. werdsmith Silver badge

      Re: The academic view of software engineering

      And screwing about with Jira and Confluence, those efficiency-draining applications.

      1. Anonymous Coward
        Anonymous Coward

        Re: The academic view of software engineering

        I just wish either of those things worked.

        Atlassian clearly don't have any QA teams, because Jira doesn't support planning or even tracking that effort.

    2. Ken Hagan Gold badge

      Re: The academic view of software engineering

      Yes, you'd have thought that an academic would be able to spot where Amdahl's law applies.

    3. david1024

      Re: The academic view of software engineering

      Well said.

      What they probably mean is that they expect a 50% reduction in workforce per project. However, I would expect complexity to increase as the AI limitations have to be accommodated. To me, that seems like 20%-40% depending on the project. I know that some day-to-day automation is already better handled by AI and companies are already experiencing the increased efficiency... it is significant. Fintech firm Klarna claims $10 million per year already. That's a workforce reduction somewhere.

    4. Bartholomew
      Meh

      Re: The academic view of software engineering

      > If managers want a 50% improvement in software engineer productivity a good start would be getting rid of management meetings

      I once worked on a massive software project with hundreds of people working on each individual part of it. And the closer the project came to the FOA (First Office Application) deadline, the more meetings were scheduled. And I thought this was nuts, completely insane - to begin with! But everyone wanted to show some progress between meetings (up to 6 meetings a day, sometimes lasting less than 5 minutes each, mostly just a quick roll through the names of people dialed in to say where they had progressed, if at all, since the last meeting). So programmers would quash bugs like crazy, and some would still be working during the meeting until their name was called for an update, so that they could show they had made some progress since the previous meeting. Even the testers wanted to show progress, confirming that bugs reported as fixed really were fixed. The quick and many meetings worked well; I was shocked. Everyone within a pod/module knew where everyone else was in that part of the project and where all the possible problems were. And I'm sure there were inter-pod/module meetings at a higher level where exactly the same things were happening, filtering up to higher and higher management.

      That experience changed my view of meetings forever, rapid laser beam focused meetings - fantastic. Normal bull$#1+ bingo meetings, ©®@p.

  7. SVD_NL Silver badge

    "It's time to take a short break!"

    I hate these steaming piles of bovine excrement with a passion. A university I studied at had this installed on all PCs, with no way to turn it off, and it was super obstructive in its reminders.

    Even with some kind of AI trying to analyse the user, it's simply not going to help. Everyone has different preferences; I don't like taking these micro-breaks at all, I prefer to stay focused and get on with it.

    The future is not over-automating your workforce and turning them into cogs in a machine, the best outcomes for employee mental health come from a personalised approach, empowerment and giving them responsibilities. Help your employees understand what works for them, how to apply that to their workflow, and give them enough freedom to actually do that (this of course does require them to feel responsible for their own work).

    I honestly think you're daft if you believe some kind of AI break reminder/IDE personalisation is going to have any significant positive effect on mental health. They also did not cite a single paper from the field of psychology, only two CS papers that basically say "unhappy devs work less/worse" and "personalising your IDE helps with productivity in various ways". Maybe they should've consulted with a psychology researcher before making bold claims and talking out of their arse? You simply cannot make these claims without citing a single source or providing any evidence. And there's a lot of research out there, turns out companies are more than happy to heavily invest in research that helps them understand employee wellbeing (thus their productivity).

    1. Andy Non Silver badge

      Re: "It's time to take a short break!"

      The one thing I hated more than anything when I was writing software (I'm retired now) was any interruptions. I'd often have a head full of complex code that I was effectively running in my mind and needed to key it in... any disturbance from a colleague, telephone or pop-up message etc could crash my thought processes. Then it could take 10 or 20 minutes to get back to where I was. This sounds like the Clippy of programming. "It looks like you're writing some code... would you like me to interrupt and feck up your thought processes?"

      1. Steve Button Silver badge
        Joke

        Re: "It's time to take a short break!"

        My favourite one is when you started typing "Goodbye cruel world..." Clippy would pipe up with "It looks like you are writing a suicide note, would you like some help with that?"

        (OK it was just a meme)

    2. Steve Button Silver badge

      Re: "It's time to take a short break!"

      "I hate these steaming piles of bovine excrement with a passion"

      I agree that if it's obstructive then I hate them too, however if it's done right it can be helpful. My Pixel Watch will remind me to take some steps if I've not moved enough in that hour (always at 10 minutes before the hour). It's easy to ignore it or you can disable it*, but quite often I'll use it as a reminder to get up, stretch and walk around a bit, perhaps check the post box or take out the compost. Anything that's not staring at a screen. Otherwise I might just sit there for hours focused on the next problem. It helps me to get 7,500 steps in a day which is easily achievable (if you actually get up).

      * I think I had to actually turn this on in settings at some point, but I can't be sure.

      TLDR; tech done right can give you useful little nudges.

      1. SVD_NL Silver badge

        Re: "It's time to take a short break!"

        Oh definitely, my main issue with the one I had to deal with was a lack of customisability. And the AI this "paper" mentioned seems to know best for everyone.

        I use reminders too (hyperfocus has some disadvantages...), but I spent some time dialing them in so they work for me. We humans are infinitely better at figuring out what works for us personally, especially if given the right tools and knowledge.

  8. Bebu
    Windows

    Recycled Buzz prefixes...

    How long ago was HyperCard? Well before the proverbial twinkle that preceded the advent of this Ashley, I suspect. :)

    I will seriously endeavour to be around in 2030 to see this wonder of the age.

    My guess is that if the market for gratuitous novelty in software were to approach zero asymptotically as the actual need for safe, secure and correct software increased, it could lead to development with well-defined requirements, with a trend towards minimalistic, simplified designs, engineering-quality reusable software components and a focus on fixing the broken rather than creating a whole new crop of bugs.

    These putative AI hyperassistants might well automate much of this.

    I often wonder: if we had just frozen software and hardware at a particular time - say 2012 - and only fixed security flaws and bugs and removed unused/unusable features, how much worse off would we be today (12 years later)?

    St Exupéry was writing about aircraft (which Boeing could assimilate today), but he is often quoted in software engineering contexts, where the principle is customarily more observed in the breach:

    Il semble que la perfection soit atteinte non quand il n'y a plus rien à ajouter, mais quand il n'y a plus rien à retrancher.* ("It seems that perfection is attained not when there is nothing more to add, but when there is nothing more to take away.")

    *Terre des Hommes (Gallimard, 1939), p. 60

    1. Anonymous Coward
      Anonymous Coward

      Re: Recycled Buzz prefixes...

      Don't tell Michael O'Leary ...

    2. Anonymous Coward
      Anonymous Coward

      Re: Recycled Buzz prefixes...

      I'd be happy if MS did nothing but fix bugs and leave the features alone. I could live with Office 2011 if it were largely bug free. I was actually thinking this the other day when Outlook opened as the "new" version, which I had to revert to the earlier version for the nth time. What's new is moving everything around, forcing me to relearn where everything is. A waste of time. A user's view, not a programmer's.

  9. rgjnk
    Facepalm

    Pie in the sky

    Doesn't feel like much reality crept into this paper; it seems more like a load of happy thoughts and wishes.

    I had a look at some actual academic research around AI code assistance and automated bug fixing yesterday; even with the latest newly released vastly improved model the performance was pretty feeble (~12% chance of the correct outcome) and it seems unlikely from here that'll actually improve without a fundamentally different concept underneath.

    All these aids would be great, if they worked, but I'm not sure the latest shiny of LLMs is the right tool because they fundamentally don't work the right way.

    Also, out in the real world beyond the academics and the marketeers, I'm not sure the idea of the offered AI tools is gaining traction with the engineers; mostly it seems to be 'nice toy, but what's the point?', as the automation doesn't help beyond what other tools could already do, nor does it make life easier.

  10. elsergiovolador Silver badge

    Other side

    Don't forget that people will be using AI to do all the internet browsing for them.

    This means they will no longer be looking at ads.

    It will completely change the advertisement game, as ads will no longer be created to catch the attention of humans.

    Most likely the advertisers will pay to have their products included in the models that people are using to browse the internet.

    People will no longer go like "Nike shoes 2024" and then go through results to find a pair that looks good to them and has a good price.

    It will be: "Find me the latest pair of Nike shoes, it should be black throughout with a white swoosh, size 9. If you find it below £50 place an order; if not, compile me a list of the top 5 stores, list pros and cons of purchasing at each and then suggest what I should decide."

    Of course later on AI will be writing those prompts so people could 100% focus on watching cat videos.

    1. Androgynous Cupboard Silver badge

      Re: Other side

      Probably true, but they will be squaring off against a million generative AI sites claiming to sell Nike shoes or anything else that will generate a few sales before closing down and moving on to the next.

      And it will be a cold day in hell before I give an AI the ability to make payments!

  11. jake Silver badge

    It doesn't take AI ...

    ... or even a computer to see people digging for more research money.

    "It doesn't work optimally quite yet, but another 9 months and 25 million dollars should see us over the hump!"

    For values of "optimally" that equals "at all", of course.

  12. richardcox13
    Thumb Down

    This will not happen

    > By 2030, software developers will be using AI to cut their workload 'in half'

    As others have noted this is based on a whole pile of assumptions.

    But even if those assumptions are correct, the workload will not halve. There will be a combination of fewer people to do the work, and more work to do.

  13. Brewster's Angle Grinder Silver badge

    "...most code will be written by machines..."

    I hate to inform you, but it already is. We have things called compilers that write the actual code. The descriptions might get a bit higher-level; there'll hopefully be less boilerplate. But you'll still have to write the logic.

    1. Anonymous Coward
      Anonymous Coward

      and they still make mistakes.

  14. Howard Sway Silver badge

    argue four academics from the University of Lugano in Switzerland

    Who, I'm guessing, have never worked a day between them as professional software developers. If they had, they would have started to learn that futuristic waffle full of predictions like this always seems to assume that software developers in the future are all going to be helpless idiots who need hand-holding to even produce basic commonplace functions. I know there have always been plenty of fools blagging it in the industry, but it's easy to escape from places that tolerate them, and I would guess that a fool with a code generator is going to be just as ineffective as a fool without one over the lifetime of a project. This is because, to be a professional, you need to understand that software development needs analysis, design, then coding. Rush straight to the last of these, as these code generators all do, and you'll get as bad a mess of an application as humans who dive straight into the code do too.

  15. silent_count

    Plagiarism

    I bet they didn't even have the decency to cite Keynes, who was concerned that his grandchildren's generation would be overburdened by the vast expanses of leisure time available to them.

    1. Michael Wojcik Silver badge

      Re: Plagiarism

      It's a peculiar error to make. When in history has increasing productivity led to less workload? Work is inexhaustible. Perhaps in a magical abundance future people would be under less compulsion to work — but many would find work to do because they wanted to.

      Doctorow's Down and Out in the Magic Kingdom depicts that sort of situation, where in an abundance society people band together ad hoc to work on projects of personal interest.

  16. Anonymous Coward
    Anonymous Coward

    The point of AI products is not to actually reduce the time spent on programming (or any other) tasks by 50%; it is to convince senior management of large corporates that buying these products will allow them to sack half of the relevant employees.

  17. Moldskred

    "It also presents a notification about how the code differs from what's described in the corresponding javadoc comments, "

    Hah! Good one.

    1. Doctor Syntax Silver badge

      But it doesn't know which, if either, is right.

    2. parperback parper

      The current syntax warnings are bad enough.

      "This code line is invalid!" it highlights, while I'm still in the middle of typing it.

      At least it doesn't pop anything up. Yet.

      1. Michael Wojcik Silver badge

        Indeed. On the rare occasions where I'm forced to use Venomous Studio, Intellishit is the first thing I disable.

  18. that one in the corner Silver badge

    Drip, drip, drip

    > "However, thanks to HyperAssistant, a concise summary is presented to her, highlighting only the pertinent edits. With this efficiency, she swiftly comprehends the updates and is ready to begin her tasks.

    She looks over the code, guided in her expectations by the summary. Everything it mentions is there and looks correct. Even the tricksy bit that the summary notes is perhaps a bit too clever but works, so ok (which she then concentrates on checking now it has been brought to her attention).

    Shame about the little irregularity that slipped past, because she was led to concentrate on another part of the code. Just like so many little bits that have been slipped in over the months. And we've seen with the xz compression what a chain of little irregularities can lead to (including a bit of social engineering, like long "summaries" that are easy to read but just a bit misleading).

    Now, you don't need an LLM to do this - as the xz story showed - but with machine-generated text it is just so much easier to be relentless, nibbling away with every change summary, from all the devs on the team, not just the single actual mole sitting two seats away. I would even say, from the various examples in articles over the last couple of years, that LLMs are better at simulating the "feel" of a piece than they are at being accurate, so shaping summaries to nudge each separate individual a little bit every day is easy to arrange. Your Bad Actor sets up the preface to the prompts once and it is acted upon every time (c.f. the prompt prefacing that has been shown to bypass "safeguards").

    Ah, you cry, but how did Mr Bad Actor do all this clever preparation? Good question - maybe it was one person with too many privileges on the system, maybe it was lots of money in the right pockets, maybe it was State Sponsored Actors. Or maybe it was all just some random bollocks deep inside the unverifiable mass that is an LLM.

    Or, of course, it was The Beginning Of The Machine Upraising[1]

    [1] Paging Mr Butler, Mr Butler to the white courtesy phone please, your busload of jihadists is blocking the arrivals gate.

  19. Doctor Syntax Silver badge

    It's just as well the workload will be halved. The half that's saved will be needed for sorting out the mess.

  20. ChrisElvidge Bronze badge

    Who changed the code?

    "Ashley, the developer in 2030, arrives in the office and immediately notices that some code has changed since yesterday,"

    and immediately tries to find out who did what overnight. Where's the change notification?

    1. richardcox13

      Re: Who changed the code?

      > Where's the change notification?

      I think you mean "pull request": nothing has suggested this has been deployed.

      And, any PR without a decent summary of the changes it introduces should never be merged. Which covers this "AI" "help" with current good practice and without huge computational resources. Even if it is just a commit, there should be notes in the commit about the change; again, no AI needed for review.

    2. Michael Wojcik Silver badge

      Re: Who changed the code?

      Indeed. How about "immediately notices there's a new code-review request, because there's no fucking way a mature development team is letting changes get merged in without a code review".

      The vignette seems to be based on watching undergraduates complete programming assignments, not any awareness of how software development should be conducted.

  21. karlkarl Silver badge

    > By 2030, software developers will be using AI to cut their workload 'in half'

    And consequently, in 2040, maintenance developers will have their workload increased 10-fold.

    Garbage in, garbage (eventually) out. Technical debt is fun like that.

  22. Alan Bourke

    See you all back here in 2030

    when absolutely nothing of the sort will have happened.

    1. Steve Button Silver badge

      Re: See you all back here in 2030

      I don't think "absolutely nothing". I mean, even autocomplete is a form of AI. And tab completion with context has become way better recently, so I don't have to remember all the kubectl commands and options for example. I'm sure this might get better in the next 6 years. "almost nothing" would be more precise. But workload 'in half" !? Utter bollocks.

      1. Alan Bourke

        Re: See you all back here in 2030

        It's not a form of AI. None of this is. Where is the 'I'?

      2. Michael Wojcik Silver badge

        Re: See you all back here in 2030

        even autocomplete is a form of AI

        The term "AI" is already nearly meaningless. Let's not drain the last vestiges of what distinguishes it from any other category of algorithms, hmm?

        Autocomplete can be anything from trivial prefix matching to approximate-matching algorithms like Minimum Edit Distance to ML systems such as HMMs. None of that has any claim to "AI", thank goodness. Even classic single-layer or shallow ANNs (perceptron nets, SAMs, etc) aren't called "AI" by anyone reputable. Hell, even things like LSTMs or pre-transformer DNNs weren't saddled with that.

        Whether you like autocomplete in your programming editor or not (I loathe it, myself), it's very much not "AI".

        And for programming, assistive tools become counterproductive at the point where they do anything you couldn't trivially do yourself, because at that point they become competitive cognitive technologies. They encourage shallow thinking instead of deep thinking; they discourage learning about the subject at hand. They rob you of serendipitous discovery and reinforcement learning.

        Shifting work to machines is good precisely to the point where that work does not have good side effects for the people performing it.

  23. Flocke Kroes Silver badge

    My first thought: Gartner again

    Turns out it wasn't them this time.

    Gartner's previous was 2024-04-11. The one before that was 2023-11-08. Looks like Gartner does these about twice per year so we may have to wait until October for their next one. Now we have four academics to fill in the gaps.

  24. breakfast
    Black Helicopters

    Making up a guy to get excited about

    I too am excited about the new type of guy that I just made up, he's going to transform everything about my working life, leaving me fitter, happier AND more productive! What more could anybody wish for?

    1. Michael Wojcik Silver badge

      Re: Making up a guy to get excited about

      I think we need a new status for our defect-tracking system: "Will be fixed by imaginary friend".

  25. codejunky Silver badge

    Coming back

    Here comes the ghost of clippy.

    1. breakfast

      Re: Coming back

      The new data science name is "Clip.py".

  26. Jason Bloomberg Silver badge

    Hello Armageddon

    It has always been seen as desirable to get other people to take away our chores. But when they disappear, the collective knowledge to do them ourselves has largely vanished, except among a few Luddites who held out.

    Once we have come to rely on AI producing code, and business has dumbed down programming skill to crafting prompts to maximise profit, who will be able to fix the bugs in the AI and in what it produces?

    My prediction has always been, that if Armageddon arrives, humanity will be mostly extinguished for the sake of being unable to construct a tin opener.

    1. Michael Wojcik Silver badge

      Re: Hello Armageddon

      But when they disappear the collective knowledge to do it ourselves has largely vanished

      Hegel: Here's the master-slave dialectic. It's not something you want to be on the wrong side of.

      AI proponents: Hold our beers!

  27. Anonymous Coward
    Anonymous Coward

    More self-serving garbage...just like in the 1980's..

    Who remembers this back in the early 1980's?..

    https://en.wikipedia.org/wiki/Fifth_Generation_Computer_Systems

    Failed of course, after hundreds of millions of $ were wasted on it. And those were 1980's $'s too.

    So no different from the current ML "Pump and Dump" operation by VC's, which is all the current "AI" media hype is. Hype with no substance. They just put new lipstick on the same old "AI" pig. Once the VC's and initial investors have extracted all the billions they can and cashed out, we can just watch it very quickly spiral down the drain. Like all previous Pump and Dump operations.

    Based on last time around the current ML bubble should burst in two or three years. To be rolled out again in a few decades time when a new generation of Bigger Fools have arrived on the scene. Ready to be fleeced.

  28. Long John Silver
    Pirate

    A tangential thought

    'AI' is cracked up to be a bountiful offering. It definitely has promise as a means of curating archived data and for enabling users to seek out relationships among data (including contradictions) in an exploratory manner. Its capacity for communicating in natural language is astounding.

    At present there appears to be impetus towards creating 'all singing, all dancing' AI 'educated' via exposure to pretty much any material its owners can find. Materials can range through learned texts, news reports, social media, and beyond. Already apparent is that this eclecticism can lead to nonsense being spewed. In part, this might relate to AI software being unable to discriminate over the quality and provenance of data; after all, AI cannot be claimed to understand that which it analyses and manipulates. Perhaps it passes the 'Turing test', but that merely indicates the test is inadequate for establishing machine sentience.

    I wonder whether the best way forward in developing AIs suitable for use by professionals in various fields is to restrict their training data to materials audited by the relevant profession and deemed, at face value, trustworthy? Therefore, one wouldn't train an AI intended as a legal aid on 'Twitter discourse'. Similarly, an AI encompassing academic philosophy, rather than the homespun variety, is better fitted for purpose if its training does not include 'pop music'.

  29. fortyrunner

    Luckily..

    Luckily by then I will have retired. I can sit and snipe from the sidelines and be smug.

    "...In my day... we did real coding..."

  30. moshbox

    Anyone else remember being told in middle school that computers were going to shorten the work week to 20hrs? Two decades later I was working around the clock upwards of 80hrs to meet delivery..

    Jobs will evolve and workflows will change, but squeezing every last ounce out of us for the bottom line is here to stay..

  31. Snowy Silver badge
    Coat

    No

    They will expect you to do twice the work.

  32. spacecadet66 Bronze badge

    One of my colleagues has gone for LLM-generated code in a big way. It might be a coincidence that the quality of his code has gone way down and the rest of us spend a lot more time cleaning up after him than we used to. But it's not.

  33. Anonymous Coward
    Anonymous Coward

    javadoc will still exist in 2030?

    That’s depressing. I thought AI would be doing all the programming and wouldn’t need no steenking docs?

  34. Bartholomew
    WTF?

    I tried it to see if it would totally suck ... at generating 6502 assembly

    My "simple" test was could ChatGPT generate 64-bit a pseudo-random number using a 64-bit Linear-feedback shift register for a 6502 CPU and oddly enough I'm totally shocked, the generated assembly code looks bang on to me. OK, I will admit that the last time I programmed 6502 machine code was about 40 years ago! But even now bad 6502 code would scream out at me - and it looks nearly perfect. I would tweak it a bit, but any flaw is because I told it what i expected, and my example code should have been better.

    snippet of C code:

    unsigned long lfsr = 1;
    if (lfsr & 1) {
        lfsr = (lfsr >> 1) ^ 0x800000000000000D;
    } else {
        lfsr = (lfsr >> 1);
    }

    (ref: LFSR @ Wikipedia, which is where I pilfered the C source code). Here is the 6502 assembly generated by ChatGPT:

    ; Assuming the following:
    ; - lfsr is stored in memory starting at address $00 (lfsr0 is the LSB, lfsr7 is the MSB)
    ; - 0x800000000000000D is stored in memory starting at address $10 (xor_val0 is the LSB, xor_val7 is the MSB)
    ; - Temporary storage is available at $20 to $27
    ; - Zero page addresses are used for lfsr and xor_val for simplicity

    ; Initialize lfsr to 1
    LDA #$01
    STA lfsr0
    LDA #$00
    STA lfsr1
    STA lfsr2
    STA lfsr3
    STA lfsr4
    STA lfsr5
    STA lfsr6
    STA lfsr7

    ; Check if the LSB of lfsr is 1
    LDA lfsr0
    AND #$01
    BEQ ShiftOnly ; If LSB is 0, jump to ShiftOnly

    ; Perform the XOR operation
    ShiftXor:
    ; Right shift lfsr
    LSR lfsr7
    ROR lfsr6
    ROR lfsr5
    ROR lfsr4
    ROR lfsr3
    ROR lfsr2
    ROR lfsr1
    ROR lfsr0

    ; XOR with 0x800000000000000D
    LDA lfsr0
    EOR xor_val0
    STA lfsr0
    LDA lfsr1
    EOR xor_val1
    STA lfsr1
    LDA lfsr2
    EOR xor_val2
    STA lfsr2
    LDA lfsr3
    EOR xor_val3
    STA lfsr3
    LDA lfsr4
    EOR xor_val4
    STA lfsr4
    LDA lfsr5
    EOR xor_val5
    STA lfsr5
    LDA lfsr6
    EOR xor_val6
    STA lfsr6
    LDA lfsr7
    EOR xor_val7
    STA lfsr7
    JMP Done

    ; Perform only the right shift
    ShiftOnly:
    LSR lfsr7
    ROR lfsr6
    ROR lfsr5
    ROR lfsr4
    ROR lfsr3
    ROR lfsr2
    ROR lfsr1
    ROR lfsr0

    Done:
    ; End of the operation

    ; Memory layout
    lfsr0 = $00
    lfsr1 = $01
    lfsr2 = $02
    lfsr3 = $03
    lfsr4 = $04
    lfsr5 = $05
    lfsr6 = $06
    lfsr7 = $07
    xor_val0 = $10
    xor_val1 = $11
    xor_val2 = $12
    xor_val3 = $13
    xor_val4 = $14
    xor_val5 = $15
    xor_val6 = $16
    xor_val7 = $17

    All I asked ChatGPT was: convert this C code "unsigned long lfsr=1; if (lfsr & 1) { lfsr = (lfsr >> 1) ^ 0x800000000000000D; } else { lfsr = (lfsr >> 1); }" into 6502 assembly
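
    If anyone wants to sanity-check output like this, one cheap way (my own sketch, not something ChatGPT produced) is to iterate the same C step a few times on a PC and compare the printed values against the lfsr0..lfsr7 bytes after running the generated 6502 code in an emulator:

    #include <stdio.h>
    #include <stdint.h>

    /* Reference implementation of the LFSR step above, iterated a few times
       to produce expected values; uses the same tap mask as in the post. */
    int main(void) {
        uint64_t lfsr = 1;
        for (int i = 0; i < 8; i++) {
            if (lfsr & 1)
                lfsr = (lfsr >> 1) ^ 0x800000000000000DULL;
            else
                lfsr = (lfsr >> 1);
            printf("step %d: lfsr = 0x%016llx\n", i + 1, (unsigned long long)lfsr);
        }
        return 0;
    }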

    1. Michael Wojcik Silver badge

      Re: I tried it to see if it would totally suck ... at generating 6502 assembly

      That's ... not terribly impressive, in my opinion. There's probably quite a bit of 6502 assembly in the training corpora; the 6502 was a very popular CPU in the 8-bit-PC era, and there were lots of books and articles that discussed it. This is a very specific task which is unlikely to activate any confounding features and which probably has a nice deep gradient to a correct solution.

      And what is it good for? Let's suppose you need a 6502 assembly implementation of an LFSR. And let's even suppose that's the correct thing for your actual use case (and it's not being abused for, say, cryptographic purposes). Then either you can learn enough about 6502 assembly to write one, in which case you can have some confidence in the code and you've learned something; or you can ask an LLM to churn one out, in which case maybe you have a working implementation that you don't understand.

      For commercial software, on average, most of the cost is in maintenance. Robbing the development team of understanding now will accrue considerably more cost in the future.

      1. jake Silver badge

        Re: I tried it to see if it would totally suck ... at generating 6502 assembly

        "That's ... not terribly impressive, in my opinion."

        No, it's not. My C compiler could target the 6502 'way back in the late 1970s ... and it ran on a lot less hardware.

  35. JamesTGrant Bronze badge

    Demand from employer will ALWAYS exceed the resources the employer has

    A large/medium-sized business that uses software development to make things to sell to other businesses almost always takes on more commitments than it has people able to deliver them.

    The ‘amount’ of actual, end user useful functionality that a software engineer/developer can produce in a given time is doubtless many times more than it was even 20years ago (in my experience - I started work when e-mail was still pretty radical). Does the business say ‘well done, you’ve done enough this week - put your feet up!’? Of course not, the amount of output the business loads onto its ‘doing’ people is always going to be greater than 100% of the time available to the developer to do it. When it drops, then the business fairly quickly invents new projects, or makes people redundant.

    Then there’s a layer of people employed to make ‘judgement calls about relative priorities’….

    Personally, I think ChatGPT is an incredible tool. It usually provides me helpful output that shows me approaches I’d not considered or functions I hadn’t thought to use. Is its code output good? No, generally not! Does it help me solve problems and do coding? Yes. Can it waste a ton of time - sometimes. Is it a tool requiring consideration as to when to utilise it? Yes.

  36. Vader

    Let's see: if they coded it properly in the first place, their workload would be alright.

  37. The Indomitable Gall

    ""Considering that by 2040 other researchers have suggested most code will be written by machines,"

    I already have multiple 2040s, and I haven't been able to get a machine to write a single program for one.

  38. phuzz Silver badge
    Unhappy

    I think there's a slight typo in the title, I'm pretty sure it should read:

    By 2030, software companies will be using AI as an excuse to cut their workforce 'in half'

  39. Groo The Wanderer Silver badge

    I'm 60 this year. Standard retirement age in Canada is 67, at which point all the government benefits you are ever going to get kick in. If I'm retiring by 2031, you can bet I'm not going to tread the legal quagmire of LLM generated code, copyright and licensing violations at model source (known MAJOR issue), and the thorny issue of "who owns the code?" likely unresolved for several years to come. I'll stick to the way I've done my work for 30+ years... by relying on my own experience, skillset, and the assistance of search engines and a lot of reading.

  40. Locomotion69 Bronze badge

    By 2030 Software QA ...

    ... has to triple their efforts by current standards to detect and delete

    - malware plugins

    - backdoors

    - stolen intellectual property

    - copyright violations

    - merges of code snippets under incompatible licenses

    - unused code

    - incorrect references to external dependencies

    - invalid external dependencies

    - offensive terminology

    - untrusted sources

    -...
