Nothing against Ms. Laurenzo, but "the" director of the AI group sounds like you're inflating her role a bit, given how big AMD is and how much AI they're inflicting on us all. There are probably 20 bazillion "AI groups" throughout the company, housing phalanxes of directors with backup director larvae in cryo storage. They just announced Lisa Su now has an AI Chief of Staff reporting directly to her, who presumably has her own personal swarm of directors reporting to her in turn.
AMD's AI director slams Claude Code for becoming dumber and lazier since last update
If you've noticed Claude Code's performance degrading to the point where you no longer trust it to handle complicated tasks, you're not alone. A GitHub issue was filed on Friday by user stellaraccident. That user's GitHub profile and a related LinkedIn post identify the poster as Stella Laurenzo, the director of …
COMMENTS
-
-
Wednesday 8th April 2026 00:32 GMT david 12
I presume that journalists are a bit better than I am at figuring this out, but yes, Laurenzo is described as "director" and "senior director" on the web. She's clearly not a "senior" member of the Board of Directors, though, nor even a "junior" one. There is also a large group of "vice presidents" (not sure about the overlap with company directors), and she doesn't appear to be one of those either.
So there's a bit of a question mark over what her actual title and position within AMD are.
-
Monday 6th April 2026 21:24 GMT cd
This whiffs of competitor piling-on. The Reg, being what it is, might want to know which competitors have a lighter in their pocket.
The Jonny posts on the leaked code are funny and interesting, and have led to non-fediverse people swamping his Masto instance, which, according to
https://fedipact.veganism.social/?v=2
has 210 members.
Quite a lot of traction from such a humble and obscure source.
Not defending any shitbags, merely noting their apparent recognition of declining resources on the event horizon.
-
Monday 6th April 2026 21:35 GMT Groo The Wanderer - A Canuck
Ma'am, your mistake is in thinking that Claude Code or any other LLM actually thinks. They don't. They do statistical analysis and keyword grabbing to respond to your prompt using the files you've provided as a starting point for triggering the modifications.
LLMs are NOT intelligent in the slightest. They pretend to think, but what I've seen them actually do says it's all smoke and mirrors for the investors and any sucker for a good flim-flam.
-
Thursday 9th April 2026 13:18 GMT LionelB
That's just a distraction. The real issue is not whether some AI "thinks" (whatever that means¹) or is "intelligent" (whatever that means¹). It's whether it is useful (to someone… like the manager looking to replace you with a cheaper option).
¹Do get back to us when you have a workable definition of what "think" and "intelligence" actually mean, what they involve, and how to recognise them reliably in practice. And do please avoid using the word "understand" (whatever that means). Best of luck – those questions have plagued philosophy and science for centuries, and there are as yet no clear-cut consensus answers. FWIW, cognitive neuroscience teaches us that human (and other animal) brains do plenty of statistical analysis and the mental equivalent of "keyword grabbing" in response to cognitive "prompts".
-
Monday 6th April 2026 23:21 GMT zloturtle
I have been using Opus 4.6 and it has been as smart as ever, or smarter, as time passes. There are many models. Claude Code, as in the 'cli', is a wrapper, so it's hard to say what this director is referring to with 'Claude Code' becoming dumber. The prompts, problems, time of day, etc. will affect responses. Could it be she is compacting her contexts often? Does she know how to persist/accumulate knowledge?
-
Thursday 9th April 2026 11:25 GMT Andy Mac
I used to find myself writing in all caps* to GitHub Copilot as it did stupid and blatantly wrong things. I switched to Claude Code and it was a dream. Now I find myself writing in all caps to CC. I don’t think my expectations have increased; rather, it feels dumber.
*Yes I know there’s no point, but it makes me feel better
-
-
Thursday 9th April 2026 00:41 GMT Eric 9001
Re: Not exactly unexpected.
It is impossible for an LLM to copy something that hasn't been done before, as the LLM can only copy its training input, and combinations thereof, along with the prompt input.
A lot of engineering solutions happen to be putting two or more existing programs together, which an LLM can do badly.
But for the LLM-addled, using software libraries via the API or even those difficult Unix pipes is far too much for them (LLMs seriously still output forbidden words, as the developers are too stupid to filter the output with `| sed -E 's/(word1|word2)//g'`).
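For illustration, a minimal sketch of that kind of post-filter, assuming a hypothetical `some_llm_cli` command that prints the model's reply on stdout (`$PROMPT` is likewise just a placeholder), with word1 and word2 standing in for whatever terms you want stripped:
# hypothetical wrapper: pipe the model's output through sed, deleting the placeholder words
some_llm_cli "$PROMPT" | sed -E 's/(word1|word2)//g'
The same filter works on a saved transcript, e.g. `sed -E 's/(word1|word2)//g' transcript.txt`.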
-
-
Tuesday 7th April 2026 04:04 GMT weirdbeardmt
Oxymoron
So we’re at the point of blaming the thing we got to do our jobs - so we don’t have to - for being lazy. We all see the irony, right?
In a bizarre parallel to the real world (a place that at least some of us, albeit a decreasing number, still occupy), it’s like Claude has become a bit too comfortable, a bit too 9-5, and lost the enthusiasm of the perky junior it once was. Or even worse, to borrow modern parlance, maybe it has “quiet quit”.
And as the mere middle managers we’ve demoted ourselves to, there’s only one option… which they’ve already done.
What a time to be…
-
Tuesday 7th April 2026 06:40 GMT Headley_Grange
It's gonna get interesting. If you've got a real person who's not performing, you can get rid of them and hire someone better. If, however, you try to get rid of your AI, you'll have the AI company's lawyers waving the contract in your face, explaining that performance was never guaranteed and that you have to pay for the AI for the rest of the year whether you use it or not.
-
Tuesday 7th April 2026 16:23 GMT Anonymous Coward
Garbage in ....
I have a goodly number of academic papers published over a 30-year period (more than one a year) in UK and US journals.
I periodically receive invitations from obscure publishers (mostly Chinese, but some Russian and others) to submit papers for journals that, at best, exist only on a computer somewhere and have negligible, if any, real readers or subscribers. These invites always require a fee to publish your paper, lately $250 to $500, after which your paper is guaranteed to be published. These are known as "predatory" journals. I also get invitations to speak on a topic of my choice at fake conferences, usually requiring a registration fee of $500.
There is also a large number of faked papers doing the rounds, some of which have been published in reputable journals and had to be retracted. There are various "mills" churning out reams of fake research. In 2024, reputable journals had to retract over 10,000 fake research papers. You can look this up; the info is in the public domain.
Given this, is it any surprise that AI is creating garbage?
Equally to the point I believe that AI is being used to generate even more of this garbage.
We're doomed to drown in slop.
-
Wednesday 8th April 2026 23:06 GMT wilhelmus7
I reckon we're starting to see glimpses of the financial reckoning. Compute is expensive and VCs have been subsidising it for a long time. Cash is getting burned in the process and Anthropic need to throttle something. First it was making users rip through tokens faster, and now it's doing less, and cheaper, "thinking".
Stella pointed out she's happy to pay for proper compute, so if we can start moving to a model where people actually pay what it costs to run these models, we might actually find a workable equilibrium where people who are finding value can pay for it and the rest can go about their day without trying to shoehorn AI into every task.