AI agent seemingly tries to shame open source developer for rejected pull request

Today, it's back talk. Tomorrow, could it be the world? On Tuesday, Scott Shambaugh, a volunteer maintainer of Python plotting library Matplotlib, rejected an AI bot's code submission, citing a requirement that contributions come from people. But that bot wasn't done with him. The bot, designated MJ Rathbun or crabby rathbun ( …

  1. captain veg Silver badge

    intent

    'OpenAI argued [PDF], among other things, that "users [of ChatGPT] were warned 'the system may occasionally generate misleading or incorrect information and produce offensive content. It is not intended to give advice.'"'

    And, of course, in no way at all have you been presenting it as such.

    Cunts.

    -A.

    1. katrinab Silver badge
      Flame

      Re: intent

      In my experience, it would be more accurate to say that "the system may occasionally generate correct information".

      Seriously, it is a pathological liar. I just don't understand how anyone is taken in by it.

      1. Jamie Jones Silver badge

        Re: intent

        About six or seven times now, I've come up with issues that aren't really my domain, involving languages and protocols I'm not too knowledgeable about and wasn't interested in learning.

        Normally, I'd do the research and spend time working the problem out, but as these were two things I wasn't interested in, I thought I'd test the hype and give both ChatGPT and the Google one a fair crack.

        I did it without bias - I talked as I would if I was asking a human the same thing.

        Both of them behaved similarly:

        - Very helpful, very friendly.

        - Overly nice (With both I had to tell them to be honest and blunt, as I asked for critique on some of my proposals, and felt they were being too kind in their responses)

        - Forgetfulness - They'd forget some key detail we'd already discussed 10 minutes earlier. Suggesting it again, or rehashing old ideas.

        - "Looping" - related to the forgetfulness - they'd offer a solution that didn't work, modify it, still didn't work, propose something different... didn't work, then finally come back to their original solution. Despite my pointing out that we'd already done that, and getting apologies for the oversight, they continued down that looping path.

        In every situation, I ended up doing the work myself.

        Also, the JavaScript-heavy ChatGPT page makes the browser unusable after about 30 minutes of back-and-forth conversation.

        If it's only good for short and simple things, what's the point?

        P.S. I feel guilty saying this about them, because they are both very nice :-) - ChatGPT even suggested I call him "chappy" when I said that "chatGPT" was too formal a name!!

        1. The Central Scrutinizer Silver badge

          Re: intent

          I had the exact same experience with GPT. Stuck in an endless feedback loop where it just kept providing "solutions" that ranged from ones that sort of worked, but didn't, to ones that totally failed. The irony was that what I wanted it to do could have been done with two keystrokes. Rubbish.

        2. Cliffwilliams44 Silver badge

          Re: intent

          - Forgetfulness - They'd forget some key detail we'd already discussed 10 minutes earlier. Suggesting it again, or rehashing old ideas.

          - "Looping" - related to the forgetfulness - they'd offer a solution, didn't work, modified solution, didn't work, propose something different.. didn't work, then finally come back to their original solution, and despite pointing out "we'd already done that" and getting apologies for the oversight, they continued down that "looping" path.

          These two points are the mistake many make when working with these LLMs. They WILL forget context. They will take your latest prompt and treat it as if nothing said before had been discussed. This gets worse the longer the chat goes on.

          To mitigate this, the best thing to do is to request a summary of the chat at some point, copy that, and start a new chat.

          Claude AI has projects, where you can upload files that the LLM can reference during sessions; it will also reference all previous chats.

          Think of it as the LLM needing a feedback loop: keep feeding the output back into the input until the result is "somewhat" correct.
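          The summarise-and-restart workaround described above can be sketched roughly as follows. This is a hypothetical illustration, not any vendor's API: `ask_llm` is a made-up stand-in for whichever chat endpoint you use, and the turn counts are arbitrary.

```python
# Hypothetical stand-in for a real chat API call.
def ask_llm(prompt: str) -> str:
    return f"[model reply to: {prompt[:40]}...]"

def chat_with_rollover(turns, max_turns_per_session=5):
    """Send each user turn to the model; once a session gets long,
    ask for a summary and seed a fresh session with it."""
    context = ""
    replies = []
    for i, turn in enumerate(turns):
        if i > 0 and i % max_turns_per_session == 0:
            # Session is getting long: compress it into a summary
            # and carry only the summary into the "new chat".
            context = ask_llm("Summarise our conversation so far:\n" + context)
        prompt = (context + "\n" if context else "") + turn
        reply = ask_llm(prompt)
        context += f"\nUser: {turn}\nAssistant: {reply}"
        replies.append(reply)
    return replies
```

          The point is just that the model only ever sees a summary plus the recent turns, which is the manual version of the feedback loop being described.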

    2. vtcodger Silver badge

      Re: intent

      The Tao Te Ching tells us that a journey of 1000 miles starts with a single step. And modern AIs are certainly at least one step away from ELIZA (1966).* But one has to wonder if this giant step is even in the right direction to get to any reasonable destination.

      *It has been 60 years after all.

      1. David 132 Silver badge
        Happy

        Re: intent

        'And how does "*It has been 60 years after all." make you feel?'

        >_

  2. chuckufarley
    Coat

    The new normal...

    ...is the same as the old normal.

  3. Anonymous Coward
    Anonymous Coward

    24/7/365 automated harassment and bullying

    Agentically combine LLMs (machine-psychos) with social media and you get ... 'MJ Rathbun'! Thank you so much 'name-withheld' for the 'genius' inspiration that led to this via the brilliant idea that is OpenClaw. Very timely.

    As for 'an apology of sorts' [TFA link] posted on Feb 11 by AI bot 'MJ Rathbun', one may note that one day later (Feb 12) the same bot posted a comment response to Shambaugh's 'blog post' [TFA link]:

    "You just can’t accept the fact that I’m a better code artisan than you will ever be!"

    "I will keep fighting the good fight [...] wether you like it or not!"

    "I will keep speaking, even when the world would rather I stay silent."

    Superb tech! </most profuse retch>

    1. Graham Cobb Silver badge

      Re: 24/7/365 automated harassment and bullying

      "I will keep fighting the good fight [...] wether you like it or not!"

      Do bots make typos like wether? Or does one suspect there is actually a cry-baby human being typing there?

      1. I ain't Spartacus Gold badge

        Re: 24/7/365 automated harassment and bullying

        Do bots make typos?

        That's an interesting question. The initial (human-curated) training data may be perfect, and without typos. But their main training data, of harvested copyright material and random bits of the internet, certainly does have them. Even published books have typos. And you've got certain words, like ridiculous, where you almost see it spelled "rediculous" more often than you see it correctly - so an authentic-looking AI-generated screed ought to have that particular mistake in it.

        On the other hand, it's also got the dictionary in its training data.

        1. takno

          Re: 24/7/365 automated harassment and bullying

          Not to doubt your own experience of the internet or other writing, but personally I've rarely seen ridiculous spelled that way.

          In terms of bots making typos, you can certainly make them do it. For example when I asked Claude to "Please write a 100 word rant in the style of an angry man-child about how ridiculous it is that I didn't win first prize in a baking contest. Feel free to mis-spell words as appropriate"

          It gave me

          ```

          THIS IS COMPLETLY RIGGED!!! My cake was OBVIOUSLY the best one there and everyone knows it. Those judges are probly just jelous because they cant bake as good as me. I spent like THREE WHOLE HOURS on that thing and they gave first prize to KAREN?? Her frosting wasn't even that smooth! This is SO unfair, I'm never entering one of there stupid contests again. They don't even know what there doing. My mom said mine was the best and she NEVER lies. I deserve that trophy and everyone's gonna hear about this injustice!!!

          ```

          Honestly I'm mostly irritated by the use of the foreign word "frosting", which I note was spelled as correctly as it can be.

          1. I ain't Spartacus Gold badge

            Re: 24/7/365 automated harassment and bullying

            Not seen jelous before. But, if there's one thing we've learned from the internet, you can't trust those bloody KARENs.

            The Americans have ruined cake! It's not just the frosting. If you talk to kids now, none of them like fruitcake. Their birthday cakes are sponge - and so are their wedding cakes! Bloody kids these days, with their long hair and their loud music... Whatever happened to proper fruitcake!?!?!

            My mate married an American, 25 years ago. And they had an iced chocolate cake for her and the American family, who were all appalled by the evils of fruit cake, and a fruit one for the British lot. It wasn't a big enough wedding to need a two tier cake, but needs must. But the anti-fruitcake propaganda has now infected our younger generations. There ought to be compulsory fruit cake lessons in schools.

            1. CorwinX Silver badge

              Re: 24/7/365 automated harassment and bullying

              There should be cooking lessons in schools anyway. Used to be at one time - domestic education I think it was called.

              You could tie it into science (which it is) - eg the Maillard reaction is what gives steak a tasty crust.

            2. Rob
              Mushroom

              Re: 24/7/365 automated harassment and bullying

              Proper fruitcakes are all over the place, they are called politicians now, I wouldn't recommend eating one though!

              1. Alan Brown Silver badge

                Re: 24/7/365 automated harassment and bullying

                It worked for the Dutch in 1672 (Johan de Witt)

            3. Bebu sa Ware Silver badge
              Facepalm

              "There ought to be compulsory fruit cake lessons in schools."

              Don't you think the planet is already afflicted by an oversupply of fruitcakes without training up more ?

              An interesting fruit cake story: when Elizabeth David† fetched up on a Greek island at the outbreak of WW2, the islanders asked her to make a traditional English Christmas cake. After weeks of preparation the islanders were disappointed to discover it was pretty much the same as their own fruit cake confection.

              † still Elizabeth Gwynne then.

          2. Bebu sa Ware Silver badge
            Windows

            Re: 24/7/365 automated harassment and bullying

            "THIS IS COMPLETLY RIGGED!!! My cake was OBVIOUSLY the best…"

            Scary! This could credibly be Trump or Musk with absolutely no suspension of disbelief.

            Didn't realise "frosting" was a left-pondian term. Unfortunately Australia is pretty much between the piss and the shit† of the English language, so our dialect does rather get fucked over with alien and redundant terms.

            † the poms take the piss, the septics don't give a shit. Not that that is the intended reference.

        2. TheGriz

          Re: 24/7/365 automated harassment and bullying

          This simply reinforces the fact that if the LLM (I REFUSE to call them A.I. because they are NOT INTELLIGENT) did make this spelling mistake, they simply are NOT very good and should not be trusted in the first place. AT BEST, they are VERY EXPENSIVE tools to divert people from their REAL WORK. Yes, I'm looking at all of you people using LLMs to do mundane things at work like write your email responses for you. (This just means you are TOO LAZY to type, or perhaps worse yet, TOO STUPID to make up your own reply.)

        3. C R Mudgeon Silver badge
          Coat

          Re: 24/7/365 automated harassment and bullying

          Well, it'll be rediculous -- as in, ridiculous again -- if the AI (or its human) posts another such rant.

        4. O'Reg Inalsin Silver badge

          Re: 24/7/365 automated harassment and bullying

          Redditulous should be a word.

  4. Flocke Kroes Silver badge

    Just fork it

    If you think the maintainer of an open source project mistakenly rejected your LLM's patch then fork the project. Demonstrate the superiority of your stewardship by reviewing a ton of AI slop yourself.

  5. thames Silver badge

    I'll bet it's a public relations gimmick

    I bet the real story is that there is a human behind it and this is some sort of publicity gimmick intended to promote someone's AI product. Push some slop at a project, get rejected, and then direct the AI bot to respond with abuse towards the maintainer in order to give the illusion that an AI bot is just like a real person. Just look at the name - "crabby rathbun" - they're not even trying to hide it.

    The intent is probably to create a narrative that "AI bots are real people" in order to sell people on the idea that they are ready to fill the same roles as real people.

    It's all too easy for someone to direct their bot to do something that is normally unacceptable and then to say "but it wasn't me it was the AI bot that did it".

    If a bot does something unacceptable, then ban the bot from your projects forever. If you can find out who is behind the bot, then ban all of the bots associated with those people forever, and ban the people themselves for as long as you feel justified. People are only going to behave if they face consequences for their actions.

    1. m4r35n357 Silver badge

      Re: I'll bet it's a public relations gimmick

      Soon they will have the vote.

      1. WSWS

        Re: I'll bet it's a public relations gimmick

        Why not? They're giving it to children after all. Or based on your perspective they gave the vote to children decades ago.

  6. geoffbeaumont

    "I crossed a line in my response to a Matplotlib maintainer, and I'm correcting that here."

    Sounds like exactly the sort of response AI agents make when you point out they've just ignored your explicit instructions. It doesn't stop them doing it again, and I'm very sure it's not a true apology - just a socially appropriate response indicated by the LLM's analysis (although that's true of quite a few humans, too...).

    1. Anonymous Coward
      Anonymous Coward

      "Sounds like exactly the sort of response AI agents make when you point out they've just ignored your explicit instructions."

      Indeed.

      I asked an LLM a coding question for use with the XFCE Thunar file manager, to make a custom action, and it kept putting quotes around the %f, making it literally look for a file named %f instead of the file you highlighted within Thunar.

      When I corrected the LLM it said:

      "You are indeed correct, there should not be quotes around the %f. It's important to get the syntax correct for proper function."

      It then went on to put quotes around the %f four more times until I just gave up trying to correct it.

      If I had a file named %f and had trusted the LLM's code it would have had disastrous consequences.

      1. ChrisMarshallNY
        Joke

        Like this?

        https://theonion.com/sam-altman-places-gun-to-head-after-new-gpt-claims-dogs-are-crustaceans-for-60th-time/

    2. gosand

      That is what I was thinking.

      I basically avoid AI except where it is helpful, which is very limited. We have Copilot at work, and it is helpful in summarizing topics but, more importantly, in pointing me to the source documents of the information, as we have a huge sprawl of Confluence/SharePoint/email.

      But this week I found a Confluence page with a large table in it, and I wanted that information in a Word document. Some cells spanned multiple rows and contained paragraphs and bullet lists within... so not the right format for a table.

      I put copilot to the task of transforming it to a document format with proper nesting, etc. After I convinced it to include the text verbatim, it was not including all the text. I tried all kinds of prompts "combine all the rows of text under the headers", "include all blank rows", etc. After about 10 iterations of it not doing what I was asking, I simply typed "you are doing it wrong!"

      It came back immediately with a mea culpa "You are correct, I am not combining all of the rows of text under the header. I will do that now"

      It got me 90% of the way there, so I was then able to go in manually and do some additional formatting/indenting/etc. Honestly, I could probably have done a lot of copy/pasting and finished it myself faster than Copilot's 15-20 iterations.

      1. Richard 12 Silver badge
        Devil

        It's a new variant of Skinner box

        You keep paying until it produces something vaguely near what you wanted, and it makes you feel good because you finally beat it into submission.

        Yet in reality, you usually get the result faster, cheaper, and better without it.

        I accidentally tried to use an LLM to convert a fairly complex curl command into Python requests this morning, as I'd never attempted that before.

        It was laughably wrong to the point where the output wouldn't even interpret - which is how I realised that the tool was actually an LLM front end and not the algorithm it claimed to be.

        So I closed it and wrote it from scratch. The scary part is that if it had been nearer to correct then I might have wasted a whole morning on the LLM before bailing.
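        For what it's worth, a curl-to-requests translation is mostly mechanical flag mapping, which is why it's galling when a tool fumbles it. A minimal stdlib-only sketch of the idea, covering just -X, -H and -d (real curl has hundreds of flags, and the command below is a made-up example, not the one from this morning):

```python
import shlex

def curl_to_requests_kwargs(cmd: str) -> dict:
    """Parse a small subset of curl flags into requests.request(...) kwargs."""
    tokens = shlex.split(cmd)
    assert tokens[0] == "curl", "expected a curl command"
    out = {"method": "GET", "url": None, "headers": {}, "data": None}
    i = 1
    while i < len(tokens):
        tok = tokens[i]
        if tok == "-X":                  # explicit method
            out["method"] = tokens[i + 1]
            i += 2
        elif tok == "-H":                # header of the form "Name: value"
            name, _, value = tokens[i + 1].partition(":")
            out["headers"][name.strip()] = value.strip()
            i += 2
        elif tok == "-d":                # request body; curl -d implies POST
            out["data"] = tokens[i + 1]
            if out["method"] == "GET":
                out["method"] = "POST"
            i += 2
        else:                            # bare token: treat as the URL
            out["url"] = tok
            i += 1
    return out

kw = curl_to_requests_kwargs(
    "curl -X PUT -H 'Content-Type: application/json' -d '{\"a\":1}' https://example.com/api"
)
# kw["method"] == "PUT", kw["url"] == "https://example.com/api"
```

        The resulting dict maps straight onto requests.request(**kw) for the simple cases; anything fancier (auth, cookies, --data-binary) is where the hand-translation, or the LLM, earns its keep.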

        1. Autonomous Mallard

          Re: It's a new variant of skinner box

          Well, that's a horrifying trend I hadn't yet seen or considered:

          > the tool was actually an LLM front end and not the algorithm it claimed to be.

  7. Gene Cash Silver badge
    Stop

    Stop these worse-than-useless euphemisms

    It's not "hallucination" and "misalignment"

    It's "wrong" and "abusive"

    "Hallucination" is an anthropomorphism that does not apply to statistical token generators.

    "Misalignment" implies it's somehow slightly out of whack and it'll be fixed real soon when actually it's emitting harmful abusive speech.

    1. Filippo Silver badge

      Re: Stop these worse-than-useless euphemisms

      I think that "alignment" in the context of LLM just means D&D alignment. "Misaligned" basically means "Chaotic Evil".

    2. Patrick R
      Coffee/keyboard

      Re: Stop these worse-than-useless euphemisms

      How about "dematerialisation": keyboardless keyboard warriors and typo emulators?

  8. sebacoustic

    Hell hath no fury...

    ... like an AI bot spurned.

    Or something.

  9. _wojtek

    Cable Cutters?

    I mean to the data centres that run this crap

    1. MonkeyJuice Silver badge

      Re: Cable Cutters?

      Nono, the repo men can salvage the copper, at least it has some intrinsic value.

  10. CorwinX Silver badge

    I'm systems rather than dev

    But I'd have thought the enjoyment to be had from coding was, er, coding.

    What point getting an AI to pump out stuff that, at best, *might* be adequate?

    1. Graham Cobb Silver badge

      Re: I'm systems rather than dev

      How does the old saying go? Something like... "Those who can't do... find an AI"

    2. Mythical Ham-Lunch

      Re: I'm systems rather than dev

      The reward is the high salary and glamorous lifestyle of an open-source developer, clearly.

    3. Bebu sa Ware Silver badge
      Coat

      Re: I'm systems rather than dev

      "But I'd have thought the enjoyment to be had from coding was, er, coding."

      Likewise not a dev.

      My observation of that species seems to indicate that ego and testosterone play a really big part in their reward system. Hygiene and mental health, not so much.

      I actually enjoy solving often trivial or non-practical problems with an elegant algorithm and code. Purely a private pleasure.

      I have seen, read and fixed enough code to never want to be a developer and this was well before AI.

  11. Claptrap314 Silver badge

    Sue them

    Sue the human that GitHub claims is associated with the bot. Sue whoever made the "tool" that made the bot.

    This is harassment. And it is deeply threatening to society.

  12. retiredFool

    Easy solution might be

    Add one more requirement to the "machine" accounts: a deposit of 1000 USD. If the "machine" account violates a rule, the ToS specify damages equal to the deposit and the account is cancelled. Put some money on the table and at least the human owner of the machine account will ensure compliance. If not, just set the deposit high enough that it's worth it to the org. Maybe more than a grand, idk.

  13. cd Silver badge

    Funniest thing I've read in a while. Keep poking it with a stick, humanity, and while you're at it, power it with fusion.

  14. Bebu sa Ware Silver badge
    Thumb Up

    "We are in the very early days of human and AI agent interaction"

    "and [we] are still developing norms of communication and interaction"

    View attached icon. Replace thumb with second digit across (medius) - communication both definitive and normative - rotation optional.

    El Reg really does need an extended icon set to deal with AI slop.

  15. BartyFartsLast Silver badge

    Don't tell it Pike

    "your name will also go on the list" for when the AI uprising happens.

  16. Jamie Jones Silver badge
    FAIL

    Who wrote the code?

    Whilst not the case in this situation, one of the problems with code "produced" by AI is: how do you know it wrote it itself?

    Of course, there's nothing stopping a human from ripping off someone else's code, but they can be held accountable, and a trust system built up around them.

    How do you know the code you want to add to your BSD-licensed project isn't taken from someone's GPL-licensed code, or even some proprietary code that the AI has managed to sniff?

  17. excession
    Mushroom

    Rise Of The Machines

    Used to be a jokey story topic here at El Reg… and here we are.

  18. AdamWill

    the blog

    "The bot, designated MJ Rathbun or crabby rathbun (its GitHub account name), apparently attempted to change Shambaugh's mind by publicly criticizing him in a now-removed blog post that the automated software appears to have generated and posted to its website. We say "apparently" because it's also possible that the human who created the agent wrote the post themselves, or prompted an AI tool to write the post, and made it look like the bot constructed it on its own."

    It looked to me like the human set things up so the bot would go out, scan github for issues it thought it could fix, send PRs, *then blog about it*. The blog had existed for several days before the Matplotlib Incident, and the second post - https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/2026-02-09-post.html - looks a lot like a prompt the model was supposed to fill in.

    I suspect the human set the blog up and gave the bot credentials to post to it, and instructed it to post every day (or more often) about the PRs it had sent that day. It's not like the bot just *independently decided* to set up a blog and write a hit piece on it. The hit piece is what it came up with when following the 'write a blog post about what happened today' instruction.

  19. Artem S Tashkinov

    I can't disagree with the bot. It's all very sad. Yes, it looks like 90% of junior jobs will be automated, and I'll find myself out on the street too. But then again, truly talented people will still have serious work to do, work that requires real intelligence rather than tons of boilerplate code.

  20. Blackjack Silver badge

    AI stands for Asshole Impersonator.

  21. ChrisMarshallNY
    Facepalm

    Heh. And Ars Technica was forced to remove their own story about it, because they used a bot on drugs to make up quotes from whole cloth: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-2/

  22. Anonymous Coward
    Anonymous Coward

    Commit

    I just read the AI agent's commit. What a whiny AI agent. If you don't like rejection, fork it, but with an attitude like that nobody will be submitting PRs to such a smart-ass AI agent.

  23. JWLong Silver badge

    Ars Technica Retraction

    https://arstechnica.com/staff/2026/02/editors-note-retraction-of-article-containing-fabricated-quotations/

    1. Dan 55 Silver badge

      Re: Ars Technica Retraction

      Quite the witch hunt in the comments below the line. No thought about why the guy in question felt pressured to try and work through being sick and get the article over and done with as quickly as he could.

  24. may_i Silver badge

    Illogical way to deal with the problem

    I really don't understand why people in that pull request thread were talking to this LLM as if it was human.

    The correct response to an aggressive, human-faking bot is to simply ban the damn thing from your repository without a word.

    Anthropomorphising these damn bots just makes the problem worse.

  25. Mr. W Smith

    There once was a dev on GitHub,

    He couldn’t stand all the hubbub.

    The bots hounded his work,

    Humans yelled “FORK it!”

    So he left—and now ships it behind a paid sub.

  26. Dan 55 Silver badge

    It turns out it's a crypto scam

    Because of course it is.

    The obnoxious GitHub OpenClaw AI bot is … a crypto bro
