
AI slop got better, so now maintainers have more work

If AI does more of the work but humans still have to check it, you need more reviewers. Now that AI models have gotten better at writing and evaluating code, open-source projects find themselves overwhelmed with the too-good-to-ignore output. For the curl project, that has meant less AI slop and more demand upon maintainers …

  1. williamyf Silver badge

    Everything old is new again

    If you substitute "AI-generated bug report" with "automated fuzzing tool", it's like reading an article from circa 2002.

    The workload of software maintainers increased significantly, because researchers used automated tools to find far more bugs...
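
    For anyone who missed that era, here's a toy version of the idea in Python (parse_url is a hypothetical stand-in for whatever parser is under test; real fuzzers watch for crashes and sanitizer reports, not just exceptions):

        import random

        def mutate(seed: bytes) -> bytes:
            # Flip a handful of random bytes in a known-good, non-empty input.
            data = bytearray(seed)
            for _ in range(random.randint(1, 4)):
                data[random.randrange(len(data))] = random.randrange(256)
            return bytes(data)

        def fuzz(parse_url, seed: bytes, iterations: int = 100_000) -> list:
            # Hammer the parser with mutated inputs and collect the failures.
            failures = []
            for _ in range(iterations):
                case = mutate(seed)
                try:
                    parse_url(case)
                except Exception as exc:
                    failures.append((case, exc))
            return failures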

    They survived the onslaught then, and the maintainers of today will survive the onslaught too...

    1. MachDiamond Silver badge

      Re: Everything old is new again

      "the survived the onlaught then, and the maintainers of today will survive the onslaught too..."

      Yeah, when it's discovered that AI is too expensive. Many tools are free or very low cost now as Vulture Capital is stoking the boilers. Once the companies have to close the loop and show a handsome profit, many are going to die. The survivors will raise their rates and usage will be pared down to only those with the highest ROI.

      The horsepower for chewing through capital is increasing at a furious rate. The desire of people to turn a few bucks into millions with no work is growing, and the wolves are going to have all of their hides tacked up on a barn wall as an example for the next generation of people who have been convinced that thinking and maths are so last generation.

  2. NapTime ForTruth

    Given this context, it seems the logical response to an excess of plausible or accurate bug reports delivered by external AI is to pass them along for replication, validation, and correction (You down with R-V-C? Yeah, you know me!) via local internal AI.

    ...on the assumption that either and both AIs are reliable, consistent, and effective for that work, of course. Plus the fun of watching the afternoon bot-slop battles unfold.
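
    Presumably something like this, where the triage_model, the reproduce() helper, and the report format are all invented for illustration:

        def rvc_triage(report: str, reproduce, triage_model) -> str:
            # Replicate: try to reproduce the reported bug locally.
            if not reproduce(report):
                return "reject: could not replicate"
            # Validate: ask the local model whether the report holds up.
            verdict = triage_model(f"Is this a real vulnerability?\n{report}")
            if "yes" not in verdict.lower():
                return "reject: failed validation"
            # Correct: ask the model for a candidate fix, for a human to review.
            patch = triage_model(f"Propose a fix for:\n{report}")
            return f"escalate to a human, with draft patch:\n{patch}"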

    If AI (writ large) can get its sh...tuff together, quality assurance becomes fully automated, development and cleanup costs drop to sub-minimum wage levels, and nobody has to go to work ever again...nor have access to food or shelter.

    That's a big win for somebody, innit?

    1. Anonymous Coward

      Downvoted because you forgot the sarcasm icon.

    2. Claptrap314 Silver badge

      It's a big win for the token sellers, I'll grant you that much.

  3. Pascal Monett Silver badge

    "someone still has to verify them"

    Sorry? Isn't AI the sum of human achievement? An actual bloodbag has to verify the results?

    What's the point?

    Are we actually saying that <gasp> AI can't be trusted?!?

    1. MachDiamond Silver badge

      Re: "someone still has to verify them"

      "Isn't AI the sum of human achievement ?"

      Nope, it's a really shiny thing being used to lure the masses into the "mechanical separation" room to turn them into chicken nuggets (you are what you eat, right?)

      Humans are the best at the things that humans need/want/desire. An AI system is likely to optimize itself into something that feeds the next AI (AIs all the way down). Software needs to perform tasks that are needful for people. It has to make sense to the people who interact with it. It has to be able to be guided as requirements change without total rewrites and whole new UIs. Humans are already bad enough at changing things for the sake of change.

      1. LionelB Silver badge

        Re: "someone still has to verify them"

        Also…

        > An AI system is likely to optimize itself into something that feeds the next AI (AIs all the way down).

        Errm, no, an AI system ain't going to "optimise" itself into anything not prescribed by human design. How exactly do you imagine that's going to happen? Current AI has no agency, no motives, motivation, or goals, beyond those engineered by humans¹. At this point, that's magical thinking.

        ¹The fact that the human engineers may not understand what's going on in the "black box"—how their models actually achieve the functionality they do—does not change this. That AIs are going to spontaneously develop motives, motivations and goals just because they get "really, really complicated" and we don't understand them terribly well may be a stock sci-fi narrative, but one with no basis in rationalism.

    2. LionelB Silver badge

      Re: "someone still has to verify them"

      Wait… are you actually saying that <gasp> humans can be trusted?!?

      The correct question is: can AI be trusted (at task xyz) more than humans? (Okay, so as things stand the list of such tasks is very short.)

  4. DS999 Silver badge

    It's a shorter-term problem

    AIs can't continue finding vulnerabilities at the same rate indefinitely, because presumably they'll (finally) be getting fixed at a rate higher than they're being added.
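
    Back-of-the-envelope, with entirely made-up rates:

        # If fixes outpace newly introduced vulnerabilities, the latent
        # backlog drains and the report rate eventually has to fall.
        backlog = 500              # as-yet-unfound vulnerabilities
        introduced_per_month = 10
        fixed_per_month = 25       # verified reports that get patched

        months = 0
        while backlog > 0:
            backlog += introduced_per_month - fixed_per_month
            months += 1
        print(f"backlog drained in ~{months} months")  # ~34 with these numbers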

    1. Dan 55 Silver badge

      Re: It's a shorter-term problem

      The problem is it makes up shit at a much higher rate than it finds vulnerabilities.

      1. LionelB Silver badge

        Re: It's a shorter-term problem

        But does it, though? Do we have benchmarks for that? (Genuine question.)
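
        (Such a benchmark is easy to define and tedious to run, since every incoming report has to be hand-triaged first. With invented counts:)

            def slop_ratio(valid_reports: int, total_reports: int) -> float:
                # Fraction of incoming reports that turn out to be made up,
                # i.e. one minus the precision of the reporting pipeline.
                return 1 - valid_reports / total_reports

            print(slop_ratio(valid_reports=12, total_reports=200))  # 0.94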

    2. doublelayer Silver badge

      Re: It's a shorter-term problem

      AI can also submit new code. That new code can work. Then another AI can find security problems in the submitted code. This can spiral forever with the creation and resolution of bugs that are security-relevant. One problem, even if the security detection and resolution AIs were both perfect, is that user-experience bugs are much harder to detect by inspection: with no simple model of what users intend the code to do, it's difficult to identify automatically when it's doing the wrong thing, and LLMs aren't the ones driving the software anyway. The work of cleaning this up falls on a few maintainers who were already burned out, who now have far more to review while trying to keep a stable mental model of what everything does and how. The pain will increase.
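
      The arithmetic of that pain is simple enough. A toy queue with invented numbers:

          arrivals_per_week = 40   # AI-generated patches and reports
          reviews_per_week = 25    # what a few burned-out maintainers can vet

          queue = 0
          for week in range(1, 11):
              queue = max(0, queue + arrivals_per_week - reviews_per_week)
              print(f"week {week:2d}: {queue} items awaiting review")
          # grows by 15 a week, with no equilibrium in sight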

      1. LionelB Silver badge

        Re: It's a shorter-term problem

        > AI can also submit new code. That new code can work. Then another AI can find security problems in the submitted code. This can spiral forever with the creation and resolution of bugs that are security-relevant.

        Of course that works perfectly well (in fact it's right out there in the wild) if you substitute "humans" for "AI".

        Don't disagree on the rest. Just to point out that there seems to be a hidden assumption in a lot of the narrative around this that AI has to be much better than humans at software engineering to be useful… well, to be useful to someone (e.g., the manager looking to replace you with a cheaper option). For that the AI only needs to be at least (almost) as good as the average (as opposed to top-end) human software engineer. And, like it or not, in some respects they already (almost) are.

  5. TheMaskedMan

    Hmm. So, in order to keep up with the bug reports, we stop paying people to report bugs. Now we have fewer bug reports, and work returns to manageable levels. The downside being that we still have bugs, but at least they're unreported. Out of sight, out of mind, as the saying goes.

    Trouble is, they're not out of sight for everyone. Somewhere, a busy little LLM is still finding them at a terrific rate, not to fix but to exploit. And sooner or later it will find some worth exploiting. With no incentive to report them, the more mercenary researcher's next best option is exploitation. I think this policy might need a rethink.

    1. find users who cut cat tail

      If you run out of money to pay for vulnerability reports, you cannot pay for them. You can rethink as much as you want. I do not think it will generate any new bug bounty money.

      So who is going to pay for the reports? The companies profiting from all this, who have a long history of avoiding any contribution to development?

  6. Pete 2 Silver badge

    Technical debt

    > "Instead, we get an ever-increasing amount of really good security reports, almost all done with the help of AI."

    The point being that only some of the AI-discovered bugs find their way to the support teams. The rest are discovered by people or groups with less honourable intentions, who exploit these bugs for their own ends.

    But all of this just illustrates the quality of code that gets released: either on a "yes, we know - we'll fix it later" basis, or simply because testing is boring and coders prefer to write new and exciting (if flawed) stuff.

  7. Arkitekt

    The models have gotten better.

    But have they really??

    1. MachDiamond Silver badge

      "The models have gotten better."

      Better at what? That's the question to be answered. Elon started measuring performance in power consumed, so many metrics are completely useless. Even pure computing metrics such as COWflops* are often quite useless. There has to be useful output before efficiency and value can even be considered.

      *Complex Orthogonally Weighted Floating Point Operations per Second

      A dose of cute <https://www.youtube.com/watch?v=4kPfBM6XkGc>
