
AMD's AI director slams Claude Code for becoming dumber and lazier since last update

If you've noticed Claude Code's performance degrading to the point where you no longer trust it to handle complicated tasks, you're not alone. A GitHub issue was filed on Friday by user stellaraccident. That user's GitHub profile and a related LinkedIn post identify the poster as Stella Laurenzo, the director of …

  1. Anonymous Coward
    Anonymous Coward

    Nothing against Ms. Laurenzo, but "the" director of the AI group sounds like you're inflating her role a bit, given how big AMD is and how much AI they're inflicting on us all. There are probably 20 bazillion "AI groups" throughout the company, housing phalanxes of directors with backup director larvae in cryo storage. They just announced Lisa Su now has an AI Chief of Staff reporting directly to her, who presumably has her own personal swarm of directors reporting to her in turn.

    1. Anonymous Coward
      Anonymous Coward

      AMD isn't exactly the biggest AI player in the world.

      1. HCV

        Literally? No. But, c'mon. They're a 35 beellion dollar company, and the "and" for AI hardware after NVIDIA.

    2. david 12 Silver badge

I presume that journalists are a bit better than I am at figuring this out, but yes, Laurenzo is described as "director" and "senior director" on the web. She's clearly not a "Senior" member of the Board of Directors, though, and not even a "junior" member. And there is a large group of "vice presidents" (not sure about the overlap with company directors), and she doesn't appear to be one of those either.

      So a bit of a question mark about what her actual title and position is within AMD.

  2. cd Silver badge

    This whiffs of competitor piling-on. The Reg, being what it is, might want to know which competitors have a lighter in their pocket.

The Jonny posts on the leaked code are funny and interesting, and have led to non-fediverse people swamping his Masto instance. Which, according to

    https://fedipact.veganism.social/?v=2

    has 210 members.

Quite a lot of traction from such a humble and obscure source.

    Not defending any shitbags, merely noting their apparent recognition of declining resources on the event horizon.

  3. Anonymous Coward
    Anonymous Coward

    Shirley

    It's still

    1) Get AI

    2) Profit

    Innit?

    1. Stevie Silver badge

      Re: Shirley

      1) Get AI

      2)

      3) Profit

      I reckon.

      Time to go to work, work all night.

  4. Groo The Wanderer - A Canuck Silver badge

    Ma'am, your mistake is in thinking that Claude Code or any other LLM actually thinks. They don't. They do statistical analysis and keyword grabbing to respond to your prompt using the files you've provided as a starting point for triggering the modifications.

LLMs are NOT intelligent in the slightest. They pretend to think, but what I've seen them actually do says it's all smoke and mirrors for the investors and any sucker for a good flim-flam.

    1. Pascal Monett Silver badge
      Trollface

      Shhh.

Don't blow investors' illusions. There's still money to be made until the bubble bursts.

      1. Anonymous Coward
        Anonymous Coward

        The rich will have long vacated their investments by then.

        They are currently offloading, amalgamating into financial derivative products, turning into multi-decade bonds and drawing in the last of the ‘get rich quick’ suckers.

Tulips for all, grown in the Darien Gap…

    2. LionelB Silver badge

That's just distraction. The real issue is not whether some AI "thinks" (whatever that means¹) or is "intelligent" (whatever that means¹). It's whether it is useful (to someone… like the manager looking to replace you with a cheaper option).

¹Do get back to us when you have a workable definition of what "think" and "intelligence" actually mean, what they involve, and how to recognise them reliably in practice. And do please avoid using the word "understand" (whatever that means). Best of luck – those questions have plagued philosophy and science for centuries, and there are as yet no clear-cut consensual answers. FWIW, cognitive neuroscience teaches us that human (and other animal) brains do plenty of statistical analysis and the mental equivalent of "keyword grabbing" in response to cognitive "prompts".

  5. zloturtle

I have been using Opus 4.6 and it has been as smart as ever, or smarter, as time passes. There are many models. Claude Code, as in the CLI, is a wrapper, so it's hard to say what this director is referring to with 'Claude Code' becoming dumber. The prompts, problems, time of day, etc. will affect responses. Could it be she is compacting her contexts often? Does she know how to persist/accumulate knowledge?

    1. munnoch Silver badge

      "time of day, etc. will affect responses"

      So, if you don't like the answer it gives you just twiddle your thumbs for a few hours and try again? I suppose it counts as "thinking" time...

    2. Andy Mac

I used to find myself writing in all caps* to GitHub Copilot as it did stupid and blatantly wrong things. I switched to Claude Code and it was a dream. Now I find myself writing in all caps to CC. I don’t think my expectations have increased; rather, it feels dumber.

      *Yes I know there’s no point, but it makes me feel better

  6. Not Yb Silver badge

    Not exactly unexpected.

    "This AI doesn't do well at complex engineering tasks" is a completely expected result of using an AI for something it's not going to be good at.

    It's difficult to copy something (like a new engineering solution) that doesn't exist yet.

    1. Eric 9001

      Re: Not exactly unexpected.

It is impossible to copy something that hasn't been done before with an LLM, as the LLM can only copy the training input and combinations thereof, as well as the prompt input.

A lot of engineering solutions happen to be putting two or more existing programs together, which an LLM can do, badly.

But for the LLM-addled, using software libraries via an API or even those difficult Unix pipes is far too much for them (LLMs seriously still output forbidden words, as the developers are too stupid to filter the output with `| sed -E 's/(word1|word2)//g'`).
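For what it's worth, the pipe that commenter gestures at would look something like this: a minimal sketch, with word1/word2 standing in as placeholder forbidden words rather than any real blocklist.

```shell
# Hypothetical output filter: strip placeholder forbidden words from
# model output with an extended-regex sed substitution.
llm_output="word1 is fine but word2 is not"
echo "$llm_output" | sed -E 's/(word1|word2)//g'
# Leftover double spaces could be squeezed afterwards with `tr -s ' '`.
```

The `-E` flag enables extended regular expressions so the alternation `(word1|word2)` works without backslash-escaping; the trailing `g` replaces every match on the line, not just the first.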

  7. weirdbeardmt

    Oxymoron

So we’re at the point of accusing the thing we got to do our jobs (so we don’t have to) of being lazy. We all see the irony, right?

    In a bizarre parallel to the real world (a place that at least some albeit a decreasing number of us still occupy) it’s like Claude has become a bit too comfortable, a bit too 9-5 and lost the enthusiasm of the perky junior it once was. Or worse even, to borrow modern parlance, maybe it has “quiet quit”.

    And as the mere middle managers we’ve demoted ourselves to, there’s only one option… which they’ve already done.

    What a time to be…

    1. Fruit and Nutcase Silver badge
      Coat

      Stroppy teenager

      Claude's just at the stroppy teenager stage.

      Claude - Stop chatting to Alexa and do the washing up, and when you've finished that, take the dog out for a walk

  8. L3

    Slop in slop out

Maybe engineers have stopped feeding it, and now it's feeding off its own slop.

  9. Headley_Grange Silver badge

It's gonna get interesting. If you've got a real person who's not performing, then you can get rid of them and hire someone better. If, however, you try to get rid of your AI, you'll have the AI company's lawyers waving the contract in your face, explaining that performance was never guaranteed and you have to pay for the AI for the rest of the year whether you use it or not.

    1. retiredFool

      Ouch, meat sacks only get 2 weeks usually. That contract for AI might give as much as 11 months of severance. AI is more like the C-suite meat sacks when it comes to removal expenses then.

    2. MrAptronym

Programmers in the US never unionized. It is a bit ironic that, by seeking to replace their workers, the bosses are now going to be paying for a service that undoubtedly has more bargaining power.

  10. Aladdin Sane Silver badge

    Claude cannot be trusted to perform complex engineering tasks

    FTFY

  11. Anonymous Coward
    Anonymous Coward

    Garbage in ....

    I have a goodly number of academic papers published over a 30 year period (more than one a year) in UK and US journals.

I periodically receive invitations from obscure publishers (mostly Chinese, but some Russian and others) to submit papers to journals that, at best, exist only on a computer somewhere and have negligible, if any, real readers or subscribers. These invites always require a fee to publish your paper, of late usually $250–$500, after which your paper is guaranteed to be published. These are known as "predatory" journals. I also get invitations to speak on a topic of my choice at fake conferences, usually requiring a registration fee of $500.

    There is also a large number of faked papers doing the rounds, some of which have been published in reputable journals and had to be retracted. There are various "mills" churning out reams of fake research. In 2024, reputable journals had to retract over 10,000 fake research papers. You can look this up, the info is in the public domain.

    Given this, is it any surprise that AI is creating garbage?

    Equally to the point I believe that AI is being used to generate even more of this garbage.

    We're doomed to drown in slop.

  12. Anonymous Coward
    Anonymous Coward

WTF is AMD doing using scamware in their internal processes? Are they trying to emulate Intel?

  13. ecofeco Silver badge
    Holmes

    WHOCOULDAKNOWED?!

    Dark triad billionaires fail at creating LLM/AI?

    It's A mySteRy!

    This is why LLM/AI can ONLY fail: psychotic sociopathic narcissists do not create anything useful in the end.

  14. wilhelmus7

I reckon we're starting to see glimpses of the financial reckoning. Compute is expensive and VCs have been subsidising it for a long time. Cash is getting burned in the process and Anthropic needs to throttle something. First it was making users rip through tokens faster, and now it's doing less and cheaper "thinking".

Stella pointed out she's happy to pay for proper compute, so if we can start moving to a model where people actually pay what it costs to run these models, we might start finding a workable equilibrium: people who are finding value can pay for it, and the rest can carry on about their day without trying to shoehorn AI into every task.

  15. Anonymous Coward
    Anonymous Coward

    A clear case of ...

    GIGO perhaps?

Either way, avoid it entirely.

  16. Will Godfrey Silver badge
    Holmes

    Wait. What?

    I find it more concerning that anyone thought it ever was capable of effectively solving complex engineering problems.
