75% of enterprise coders will use AI helpers by 2028. We didn't say productively

Global tech research company Gartner estimates that by 2028, 75 percent of enterprise software engineers will use AI code assistants, up from less than 10 percent in early 2023. As of the third quarter of 2023, 63 percent of organizations were piloting, deploying or had already deployed AI code assistants, said the survey of …

  1. cantankerous swineherd
    Devil

    get your backdoors in the training data now, if you haven't already.

    1. Throatwarbler Mangrove Silver badge
      Joke

      I've already embedded a "woke mind virus," which will cause the AI to only respond when referred to by its preferred pronoun(s), which change at arbitrary and unexpected intervals!

      1. Cav Bronze badge

        The opposite of woke is mindless.

  2. Ashto5

    Let’s hope it improves

    So far I have had to rewrite much of what was offered by these tools.

    They will get better I am sure but honestly right now I am not worried.

    1. steviebuk Silver badge

      Re: Let’s hope it improves

      True. One thing I do find them useful for (well, ChatGPT 3.5 anyway) is taking some code I'm unsure about: give it the code and it's quite good at telling me what each bit is doing.

    2. ecofeco Silver badge

      Re: Let’s hope it improves

      You can be sure it will NOT get better.

      Have you seen the last 20 years?

      1. Michael Wojcik Silver badge

        Re: Let’s hope it improves

        Oh, it'll get better, because that bar is very, very low.

        By the same token, SotA LLMs trained on large source-code corpora can already complete mundane tasks (e.g. "write an Android app that does X" where X follows the well-worn pattern of CRUD operations on a simple database, front-ended with simple forms and views) as well as a great many professional developers. That's because a great many professional developers don't do anything very challenging; they're producing software that's not significantly more complicated than what I used to have the students in my university web application design class do.

        (Those students were undergraduates in either the User Experience or the Professional Writing major, not CS or similar — CS students would have been doing much more interesting programming than that. This class was meant to give the students some insight into how software works and what the development process is like, so they'd be better able to communicate with developers.)

        Now, that's not the sort of software I work on, and I imagine the same is true for many Reg readers. But it's important to remember that the range of difficulty and sophistication in professional software development is huge, and the range of developer knowledge and skill is commensurate. There absolutely are professional programmers who could be replaced today with LLMs with no discernible loss of quality or quantity of output, just as there absolutely are professionals who do work that current SotA models are far, far away from generalizing to.

    3. spireite Silver badge

      Re: Let’s hope it improves

      Where I've used it is in quickly finding boilerplate, to give me an idea - not necessarily a solution.

      For example, Terraform configurations...

      caveat: some of them can be found on the TF site, but some can't.

  3. Mage Silver badge
    Unhappy

    Gartner estimates that by 2028, 75%

    But if reality bites it could be 10%.

    Unfortunately Senior Manglement listens to vendors and juniors, not experienced devs in their own company, because they are saying what Manglement wants to hear.

    What's Gartner's track record on these predictions and are they using so-called "AI"?

    1. Michael Wojcik Silver badge

      Re: Gartner estimates that by 2028, 75%

      I'm giving Gartner some credit on this one, since they correctly (if too conservatively) applied Amdahl's Law: a time saving of X on a fraction Y of the overall job (X, Y < 1) only cuts the total time by XY.

      As it is, I think they're overestimating both X and Y. For decent programmers, I doubt LLMs will provide a 0.5 time reduction in writing code, and more importantly, few good developers should be spending even 0.2 of their time actually writing code. There may be brief periods while implementing major new features where developers produce a lot of source, but those should be unusual. Design, testing, and addressing technical debt should all be taking priority. And in particular, people who are writing the sort of code an LLM can hand to them are probably reinventing some wheel, when they ought to be reusing an existing implementation.
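      To put rough numbers on that, a back-of-envelope sketch in Python (the 0.5 / 0.2 pair is the article's own figures; the other rows are purely illustrative guesses, not measurements):

      ```python
      # Back-of-envelope Amdahl's Law: if coding is a fraction `frac_coding` of
      # the overall job and an assistant cuts coding time by `saving`, the whole
      # job only gets saving * frac_coding shorter.

      def overall_saving(saving: float, frac_coding: float) -> float:
          """Fraction of total cycle time saved."""
          return saving * frac_coding

      def overall_speedup(saving: float, frac_coding: float) -> float:
          """Equivalent exact Amdahl speedup for the same assumptions."""
          return 1.0 / (1.0 - overall_saving(saving, frac_coding))

      # First pair = the article's figures; the rest are illustrative only.
      for saving, frac_coding in [(0.5, 0.2), (0.3, 0.2), (0.5, 0.1), (0.2, 0.1)]:
          print(f"coding time cut by {saving:.0%} on {frac_coding:.0%} of the job "
                f"-> total time down {overall_saving(saving, frac_coding):.0%}, "
                f"speedup x{overall_speedup(saving, frac_coding):.2f}")
      ```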

  4. mhoulden
    Terminator

    I can imagine the remaining 25% will be trying to switch it off. It's bad enough trying to do something in Power Automate without Copilot popping up all over the place like a hyperactive Clippy.

  5. Filippo Silver badge

    Overhype

    Now Gartner is saying 'beware of high expectations'? Gartner?

    1. steviebuk Silver badge

      Re: Overhype

      I fucking hate Gartner. I can't remember the full story now, but it was related to Equifax. Something about them praising Equifax, and then they had the massive data breach.

      1. CowHorseFrog Silver badge

        Re: Overhype

        It's sad, in a world of technology, that bullshit like Gartner even exists.

    2. Michael Wojcik Silver badge

      Re: Overhype

      Well, yes. These are the people who popularized the term "hype cycle". "Beware of high expectations" is one of their key messages.

  6. Will Godfrey Silver badge
    Coat

    First they came for the enterprise coders...

    ... but I was not one of those, so said nothing.

    ...

    ...

  7. Joe W Silver badge

    Maybe for writing tests ?

    Maybe... dunno, tests for unexpected inputs and stuff. Surely not for testing actual requirements wrt. rules, correctness of calculations...

    Well, we'll have to see. I'm not worried: these AI assistants were trained on shite code, so the results will be crap as well. GIGO, as it is known (garbage in, garbage out).

    1. heyrick Silver badge

      Re: Maybe for writing tests ?

      Wait a decade. We'll be firmly in the GIGO economy and nothing will work (only manglement will not know because the AI that answers the phone won't work either).

      You are in a maze of twisty little passages, all alike.

      1. ecofeco Silver badge

        Re: Maybe for writing tests ?

        "Welcome to Costco! I love you!"

    2. Anonymous Coward
      Anonymous Coward

      Re: Maybe for writing tests ?

      If you write code with a lot of debug-mode assertions (even if they are removed for release), then generating random test cases to blitz those assertions - that sounds plausible (a minimal sketch of what I mean is at the end of this comment). But the work is being done by the assertions. So it would only count as automated if the so-called "AI" decided and inserted the critical assertions along with the code. (Haven't seen that so far, although I've never asked for it.)

      Otherwise, test cases usually require input and expected output preparation. Volume alone is not enough - the test cases should be designed around the way the code branches. I haven't had experiences that assure me that "AI" assistants currently can be trusted to do that alone.

      I do however use a code assistant and find it truly helpful and actually fun to use. I would describe its ability as pushing the boundary between syntax and semantics - and in the process offering empirical evidence that the boundary between syntax and semantics is a gray zone.
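      To make the assertion-blitzing idea in the first paragraph concrete, a minimal sketch in Python (the parse_duration function and its invariants are made up for illustration; note that the random blitz only exercises whatever the assertions already encode):

      ```python
      import random

      def parse_duration(text: str) -> int:
          """Parse strings like '90s' or '5m' into seconds (toy example)."""
          unit = text[-1]
          value = int(text[:-1])
          # Debug-mode invariants: these carry the real testing burden.
          assert unit in ("s", "m"), f"unexpected unit: {unit!r}"
          assert value >= 0, "durations cannot be negative"
          seconds = value if unit == "s" else value * 60
          assert seconds >= value, "conversion must never shrink the value"
          return seconds

      # Blitz the assertions with random well-formed inputs. The generated
      # cases supply volume; the assertions supply the actual checking.
      random.seed(0)
      for _ in range(1000):
          candidate = f"{random.randint(0, 10_000)}{random.choice('sm')}"
          parse_duration(candidate)
      print("1000 random cases ran without tripping an assertion")
      ```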

      1. Tim 11

        Re: Maybe for writing tests ?

        In the future I can quite imagine an AI being able to examine the code, identify test conditions and generate a complete set of regression tests.

        It couldn't prove the existing code was doing what the user wanted, but it could ensure nothing got accidentally broken by a bug fix or an update to third-party components or the execution environment, which is the main reason we use test automation.

        This could apply to end-to-end testing as well as unit tests.

    3. Michael Wojcik Silver badge

      Re: Maybe for writing tests ?

      It's pretty straightforward for an autoregressive model with a large context window to write unit tests, particularly when the target uses a language that has a strong unit-test framework already available.

      The problem is that such tests would only confirm that the units do what they're written to do. Whether that has any relationship to what they're intended to do is quite another story. The point of unit testing isn't so much to exercise the code as it is to exercise the expectations.
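      A toy illustration of that last point (days_between and its "generated" test are hypothetical): a unit test derived from the implementation as written will happily lock in a bug, because it records what the code does rather than what anyone intended.

      ```python
      import unittest

      def days_between(start_day: int, end_day: int) -> int:
          """Intended: days from start_day up to (but not including) end_day.
          Actual: off by one, thanks to the stray + 1."""
          return end_day - start_day + 1

      class TestDaysBetween(unittest.TestCase):
          # A test "generated from the implementation" simply bakes in the
          # current behaviour, bug included, so it passes.
          def test_matches_current_behaviour(self):
              self.assertEqual(days_between(10, 12), 3)  # a human expected 2

      if __name__ == "__main__":
          unittest.main()
      ```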

  8. steviebuk Silver badge

    It's still shit

    My partner watches Real Housewives and knows it so well that she said it would be her specialist subject. So, as a bit of fun, I said I'd ask ChatGPT for some quiz questions and answers. Of the 10 questions, about 4 had the same answer and only 2 or 3 were actually right. The others were just wrong and it made shit up.

    1. Throatwarbler Mangrove Silver badge
      Holmes

      Re: It's still shit

      Even AI has standards.

  9. steviebuk Silver badge

    Mark my words

    It will lead to another Horizon scandal. People are using it to summerise reports and emails and not checking the results. We are testing CoPilot that summerises meetings. It claimed I said something during the meeting that was never said.

    Years to come, people will use that as evidence. "Well the AI said you said it"

    The likes of DWP will abuse it to summerise benefit checks and miss something critical causing a death.

    1. ecofeco Silver badge

      Re: Mark my words

      It's going to be FAR worse than that.

    2. 43300 Silver badge

      Re: Mark my words

      "The likes of DWP will abuse it to summerise benefit checks and miss something critical causing a death"

      That might be what they want - they don't exactly have an unblemished track record for supporting people in genuine need.

      Perhaps the NHS could use AI to deal with their complaints? Training the AI for this wouldn't be difficult: whatever the complaint is about, persistently deny, obfuscate, patronise, claim black is white, take as long as possible over it, and always, always work from the basis that the complainant is lying / deluded and the NHS has been perfect in every respect. If it could manage all this it would be indistinguishable from the current system which uses actual people...

      1. Cav Bronze badge

        Re: Mark my words

        You've obviously been asleep for the last 20 years. NHS scandals, and the complaints about them, are reported on and acknowledged every other month.

        1. 43300 Silver badge

          Re: Mark my words

          I know - and nothing ever seems to improve!

          I have had first-hand experience of the NHS complaints process in the past 18 months and it's been very clear that it's designed to defend the NHS, not to objectively look at complaints.

    3. Anonymous Coward
      Anonymous Coward

      Re: Mark my words

      "People are using it to summerise reports"

      In the WINTER TIME?

      and: CoPilot that summerises meetings.

      Get a dictionary. URGENTLY.

      1. steviebuk Silver badge

        Re: Mark my words

        Yes. Summer is warmer so we like to summerise then.

        Jesus, one letter mistake, I'd hate to be around you when someone spills a bit of milk.

  10. Andy Moreton

    20%, if only!

    As a developer in a big bank I wish I could spend as much as 20% of my time coding. For me it's more like 2%.

    1. Throatwarbler Mangrove Silver badge
      Angel

      Re: 20%, if only!

      Speaking of which, I just bounced your release ticket. You'll need to get it reauthorized by your manager, the release manager, and Griffin.

  11. ecofeco Silver badge
    Mushroom

    And it all fell GIGO to bits!

    A riff on the old phrase, "It all fell gloriously to bits."

    Here's another: "Yo! We heard you like GIGO, so we added more GIGO to your GIGO! That's right! We pimped your GIGO!"

  12. CowHorseFrog Silver badge

    Why does anyone listen or pay for Gartner ?

    If they actually knew anything they would be writing their own software and selling it to the world...

  13. trevorde Silver badge

    Elephant in the room

    AI using GPL source code

  14. wowfood

    Iffy

    I'm still iffy on AI.

    Using it to generate code has, in all honesty, been terrible. It'll just generate stuff that doesn't work, or uses libraries that don't actually work together.

    But I have found (on home projects) that if I state something like "I am trying to achieve X, here are the code samples I've tried", it'll point out quite accurately why my code is failing and tends to give an alternative that (with tweaking) will resolve my issue. It's actually been a boon for learning about less common coding techniques that are very handy.

    All in all, it's nowhere near ready yet for general use, but as a lookup tool I've found it handy.

    Just rather than treating it as a code-writing tool, treat it as a reference tool and keep using your head. The only problem I'm seeing right now is less skilled / new coders over-relying on it and not actually learning from what it generates, so you get butchered code popping up in questions from people.

  15. Bebu
    Windows

    Amdahl's Law

    «Even if you're getting 50 percent faster task completion [on coding], that's only going to be 50 percent of 20 percent. So that means only 10 percent greater improvement in overall cycle time »

    Unknown to manglement types, Amdahl's Law pretty much says this (the arithmetic is spelled out at the end of this comment). Long before my time, "Time and Motion Studies" were a management tool to identify and quantify resource usage, potential bottlenecks and inefficiencies, but those managers were probably made of sterner stuff - managing the logistics of the 1944 D-Day Normandy landings was probably slightly more demanding than a software development team. :)

    I imagine any vocation that is at imminent risk of befoulment by AI/LLM is not likely to attract stellar talent. The hard part is deciding what activities would be exempt, since at present it's AI secret sauce with everything.
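    For reference, the quoted arithmetic is exactly Amdahl's Law, reading "50 percent faster task completion" as the coding time being cut in half on the 20 percent of the job that is coding:

    ```latex
    % Amdahl's Law with the article's figures: coding is a fraction p = 0.2
    % of the job and its time is halved (speedup s = 2 on that fraction).
    \[
      S_{\text{overall}} \;=\; \frac{1}{(1 - p) + p/s}
                         \;=\; \frac{1}{0.8 + 0.1}
                         \;=\; \frac{1}{0.9} \;\approx\; 1.11
    \]
    % i.e. roughly the "only 10 percent greater improvement in overall
    % cycle time" quoted above.
    ```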

  16. Thomas Martin

    This is the type of stuff that puts false information and expectations in people's and programmers' minds. Yes, AI can generate code, but even if it works, the programmer may still have hundreds of hours of debugging and customization ahead, and even then it still may not do what they want. I programmed for 33 years and there is nothing like having humans do it. AI cannot work with people. I saw an advert on TV today hawking I* W*****X that showed it generating screens of code and a supposedly happy programmer taking off. It may generate something, but as someone else said, things don't work and/or don't work together. It may even create inadvertent back doors. AI can generate, but can it police itself and find its own errors and problems?

    Something to think about...

  17. Anonymous Coward
    Anonymous Coward

    A new management layer

    Management (c/o your local golf club)

    —————— AI ——————

    You

  18. Anonymous Coward
    Anonymous Coward

    I had someone at work ask for help with his code

    It turned out he asked ChatGPT to write it. The code didn’t work because ChatGPT invented a function that didn’t exist in the library it was using.

    1. Michael Wojcik Silver badge

      Re: I had someone at work ask for help with his code

      See, if ChatGPT were any good, it would have filed a bug report against the library.

  19. xyz Silver badge

    Considering....

    That I failed miserably to book a car parking space at an airport yesterday afternoon due to the shite nature of the "UX" involved, and that hacking through my own annotated code from 5 years ago leaves me wondering what substances I'd been exposed to, I can think of nothing worse than letting a plastic developer (aka an AI) cobble together lines of stuff I have to wade through to check. Also, Gartner wouldn't know its corporate ass from its corporate elbow.

    "AI" is good for woolly or statistical analysis but for doing the dev, give me a break. We've all been down to StackOverflow, so an AI doesn't have to.

    :-)

  20. druck Silver badge
    FAIL

    Tl;dr

    AI helps shit coders produce shit code faster.

  21. RedGreen925 Bronze badge

    100% of Gartner studies will be replaced by AI soon. And Christ do I ever look forward to that day as much as I despise the AI hype garbage...
