Game dev accuses Intel of selling ‘defective’ Raptor Lake CPUs

One game developer says it's had enough of Intel's 13th- and 14th-generation Core microprocessors, calling them "defective." Australia-based indie dev studio Alderon Games made its frustrations with Intel's latest chips public in a write-up titled, "Intel is selling defective 13-14th Gen CPUs," authored by the studio's founder …

  1. Neil Barnes Silver badge
    Mushroom

    Can't help feeling

    That if someone tried to shove 512 amps through me there would be crashes, too!

    1. ExampleOne

      Re: Can't help feeling

      The article seems to be saying it allows the CPU to ask for that, not that it is putting that through the CPU unasked.

      If this is correct, then I think it is safe to say the CPU is defective. I do wonder if OEM systems exhibit the same behaviour though, because if they do, this would provide a very obvious avenue to really hurt the OEMs by burning out the CPUs while under warranty.

      1. Michael Strorm Silver badge

        Re: Can't help feeling

        I might have misunderstood, but from what I've already read about this case, I get the impression that while Intel have provided specs, they haven't been sufficiently clear in translating those into definitive limits under which one can expect these CPUs to work reliably.

        1. W.S.Gosset Silver badge

          Re: Can't help feeling

          I read it as meaning the cpu-exogenous limits are so large (run a kettle at 500 amps!?) as to be meaningless/not limits.

          Implying that the CPU design might have lucked its way through testing by always operating in a constrained state on the test-bench, but the unconstrained behaviour is self-damaging/borken.

      2. iron

        Re: Can't help feeling

        The enthusiast mobos with no power limits are a red herring; these issues also affect Supermicro servers using W-series chipsets with very conservative settings.

        1. mattaw2001

          Re: Can't help feeling

          I upvoted your comment, but based on what I read there are hints that Intel's practice of approving any and all motherboard power limits over the last five years is also causing degradation. That is still Intel's fault, as they have consistently signed off every vendor's implementation of any and all power limits! The motherboard vendors are particularly screwed: whoever pushes the CPU harder gets better benchmark scores and sells better, so all of them are forced to keep raising the power limits to stay economically competitive.

      3. UnknownUnknown

        Re: Can't help feeling

        It’s a motherboard, not an electric arc blast furnace replacement.

    2. Rich 2 Silver badge

      Re: Can't help feeling

      Indeed “… in some cases MSI motherboards set a power limit of 4,096 watts and 512 amps;”

      WTF are they expecting to be powering??? An entire server farm?

      1. Martin an gof Silver badge

        Re: Can't help feeling

        WTF are they expecting to be powering???

        Knowing nothing of this sort of thing, the only comment I'd make is that I don't think there is a normal, domestic socket anywhere in the world designed to supply over 4kW of power (and 4kW is - apparently - just the CPU limit set by the m/b; what about the other parts of the system, say the GPU?), so this is obviously a setting which means "take what you want". Seems daft when there obviously must be a physical limit; why not just report it accurately?
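
        For reference, about the beefiest common domestic outlet, a UK 13A socket, tops out at 230 V × 13 A ≈ 3 kW - so a 4,096 W "limit" for the CPU alone already exceeds anything a normal wall socket can feed the whole machine.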

        M.

        1. heyrick Silver badge

          Re: Can't help feeling

          "I don't think there is a normal, domestic socket anywhere in the world which is designed to supply over 4kW of power"

          <glances at 20A three phase socket in the corner of the living room>

          That being said, surely it isn't asking for 4kW at 230V; it'll be 4kW at, what, 1.8V? 3.3V? Still bloody ridiculous, mind...
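
          (And at a typical ~1.3 V core voltage - an assumption, not a spec - the companion 512 A limit works out to 512 A × 1.3 V ≈ 666 W, which at least maps onto something a VRM could conceivably deliver.)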

      2. Snake Silver badge

        Re: 4,096 watts and 512 amps

        I *severely* doubt that MSI's voltage regulators can actually handle that kind of output without turning into molten plastic very, very quickly. So that's fraud on MSI's part, at least.

        But if motherboard manufacturers are indeed claiming to support these levels of supply output, it could be a voltage stability issue causing these crashes.

      3. Jonathan Knight

        Re: Can't help feeling

        We have a server cluster where each server can pull 5kW if it wants, through 4 redundant 2.5kW power supplies. In the current configuration they are pulling 2.5kW each, which is distributed (roughly evenly) between the 2 CPU sockets and the 4 GPU boards, with a little left over for memory and networking.

        So a 4kW limit on power for a motherboard isn't unreasonable, but it's probably the total for the CPU sockets rather than per CPU, and reflects what the PSU can deliver if pushed.

        Jon

  2. Michael Hoffmann Silver badge

    Tech Jesus (Steve) of Gamers Nexus and Wendell of Level1 Techs just covered this in a joint episode. Steve indicated there is a follow-up with some inside info he got.

    Their scoop is that there is something bad, as in "no firmware fix, but wholesale CPU replacement" across an entire production line.

    I can't help feeling relief that, after 25 years of stodgily sticking with Intel, I had finally made the switch to AMD. About the worst issue I had was that intractable USB issue several years ago, and I no longer need a personal nuclear reactor just to power an Intel PC.

    1. An_Old_Dog Silver badge

      Intel vs AMD vs ... CPUs

      Over the years, I've had computers with Intel, NEC, Cyrix, Transmeta, VIA, and AMD x86-compatible CPUs. I've been lucky enough to have not had any CPU hardware faults.

      These days, next-process-node development requires so much money that there's little effective competition in x86-compatible CPU manufacturing. No start-up will have the needed cash.

      1. Michael Hoffmann Silver badge

        Re: Intel vs AMD vs ... CPUs

        ARM begs to differ as they - or CPUs based on the design - have been chipping away at Intel (and by extension AMD) for some years now.

        Though I'd take the various predictions I've found searching ("ARM to take 50% of notebook CPU share by 2027") with a huge chunk of sodium chloride. But who knows!

        1. markrand
          Happy

          Re: Intel vs AMD vs ... CPUs

          My best work was done with the Motorola 68000 series processors. It's all been downhill since.

          1. MrTuK

            Re: Intel vs AMD vs ... CPUs

            I 100% agree with you about the 68K - I loved that CPU design, especially programming in M/C!

          2. Terje

            Re: Intel vs AMD vs ... CPUs

            68k assembly was just so lovely

          3. Michael Hoffmann Silver badge
            Thumb Up

            Re: Intel vs AMD vs ... CPUs

            Amen to that. Once upon a time I wrote a graphical primitives library all in 68K assembler; it was a joy to work with.

    2. TheRiddler

      Anecdotal, I appreciate, but I've had two AMD 7950X3Ds fail in exactly the same way in the same system in under six months. Burnt out in one specific area of the chip, then fails to POST. Plenty of pictures online of hundreds of people experiencing the exact same thing on a variety of MBs and configurations. Wish I could say I was overclocking or something, but it's literally stock clocks on everything with high-end components throughout (Gigabyte MB and Corsair power)

      Pretty annoyed by it :(

      1. two00lbwaster

        Isn't this the known issue with EXPO profiles and motherboards with the initial launch BIOS, which set certain settings? In other words, update the BIOS and don't use those settings. https://letmegooglethat.com/?q=expo+settings+burning+out+amd+cpus

      2. MrTuK

        AMD didn't shy away from fixing this.

      3. Displacement Activity

        Burnt out in one specific area of the chip

        This. The article pretty much confirms that this is the issue with the Intel chips when it says that the eventual failure rate is 100%; failures increase over time. The 4kW thing is a red herring.

        The actual die power consumption is a function of the frequency, voltage, and capacitance driven; chips like these are very carefully designed to power up specific sections only when required, and to limit the frequency and voltage to keep the die temperature at an acceptable level. The problem is that the tools which predict temperature distribution across the die are not very good, you can never be sure whether or not the MB has adequate heatsinking, and you can never be entirely sure when silicon on a new process will fail.

        Eventually, some part of the chip will pop, unless you're very conservative. Microcode is going to be a very blunt instrument for controlling this.
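
        For reference, the standard first-order CMOS dynamic power model behind that statement (α is the switching activity factor, C the switched capacitance, V the core voltage, f the clock):

        P_dyn ≈ α · C · V² · f

        The V² term is why small voltage excursions matter far more to the die than the headline wattage number.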

      4. maffski

        This was due to a specific failure in the AGESA code for the motherboard BIOS. Ensure your motherboard is running the latest BIOS, and the potential overvolt that causes it should be blocked.

  3. mtrantalainen

    If the motherboard advertises 4096 W and 512 A and cannot deliver that for real, I think it's perfectly fine for the CPU to crash if it tries to use lots of power. Voltage ripple effects will make any CPU unstable.

    And you would need a really fast oscilloscope to verify the actual performance of the motherboard power delivery, so you basically have to trust the motherboard manufacturer's claims.

    And it doesn't help that RAM manufacturers often advertise timings that do not actually work in all cases either. For example, the rowhammer attack shouldn't be possible with correctly set timings, but many RAM sticks are vulnerable because manufacturers advertise timings that only mostly work.

    Hardware manufacturers must stop lying about their products. At first it was only GPU manufacturers claiming that their card uses "180 W" of power while you still needed a 750 W PSU as the minimum requirement! But now more and more manufacturers are selling imaginary specs, and the product fails if you actually try to use the advertised specs.

    1. Steven Raith

      Power, and stuff.

      Bear in mind that GPU manufacturer recommendations for what PSU to use are very, very conservative estimates to account for cheap shit PSUs, or billy-basic ones used by OEMs/ODMs etc.

      IE I have a 7800XT, which I'm sure recommends a >790w PSU or some such. Which is utter rot on a technical level, but it's a necessary margin to take into account that not everyone has a high quality PSU, or maybe they're running four spinning disks in there that'll draw knocking on 100w at startup, etc.

      I'm happily running it on a 550w PSU, because the power profile at absolute max is about as follows:

      CPU - if it draws more than 90w, something's gone badly wrong (Ryzen 7600, rated 65w but give it some margin for boosting etc)

      RAM/Mobo/NVME overall: maybe ~30w or so

      GPU - 300W if it spikes badly (rated for 265w IIRC, which is about what I've seen it draw when fully loaded up and benchtesting)

      Throw ~20w on there for fans etc.

      That's a total of ~440w if there's a major wobble while I'm fully loading the CPU and GPU at the same time with all the fans running full whack while also loading up the disk and network - for the most part, it's gonna be closer to 300w when gaming.

      So I wanged a mid range, decent quality (Corsair) 550w semi-modular PSU in there, and it's been just fine.
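
      A quick sanity check of that sum in Python (same rough worst-case figures as above, estimates rather than measurements):

        # Back-of-envelope PSU headroom check; figures are rough worst-case
        # estimates, not measurements.
        budget_w = {
            "cpu":   90,   # Ryzen 7600: rated 65w, margin for boost
            "board": 30,   # RAM / mobo / NVMe
            "gpu":   300,  # 7800 XT: rated ~265w, margin for spikes
            "fans":  20,   # fans and sundries
        }
        psu_w = 550

        total = sum(budget_w.values())
        print(f"worst case ~{total}w, PSU {psu_w}w, headroom {psu_w - total}w")
        # -> worst case ~440w, PSU 550w, headroom 110w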

      With respect to the 4096w/512A: that's basically saying to the CPU "draw whatever you think you need to run as you see fit" - the motherboard manufacturers will only have specced their power delivery for, say, 500w to the CPU on a serious overclocking board, and it doesn't appear to be the power delivery crapping out that's killing these CPUs.

      Let's say the CPU says "I have the thermal overhead to run 400w, so give me 400w, motherboard" and the motherboard says "tough shit, you're getting no more than 240w" - those CPUs are still dying.

      That's the case for people using workstation motherboards (which have far more conservative power limits, for stability). It's not that the CPUs are being blasted with power in those cases; they're still crashing even when run with sensible power limits.

      From what interested parties have seen, it's not specifically an over-abundance of power delivery that's killing them, and it can't be fixed with microcode - so one can only assume there's a "hard stop" problem with the manufacturing process, likely dating from when Intel started pushing the limits of what the 12th gen architecture could do for the 13th and 14th gen, which are refinements / very light refreshes of that architecture (more L3 cache, tuned to draw more power if it's available, etc). That was done to try to keep up with the AMD X3D chips, which blew everyone's socks off by drawing 105w (well, being rated for that from a cooling perspective - give it 20% wiggle room) while kicking the shins of the >250w (often way over 300w) Intel offerings.

      It's going to be very interesting to see what Gamers Nexus (actually a pretty serious benchmarking channel, rather than Capital-G Gaming type content) and Level1 Techs (less hardcore, but leaning more towards enterprise with consumer stuff in the mix) come up with from their respective investigations, as this sounds like Intel have proper "done goofed".

      Steven R

      1. Michael Strorm Silver badge

        Re: Power, and stuff.

        > very, very conservative estimates to take into account for cheap shit PSUs

        Indeed. I don't know much about PSUs, but I *do* know that the one thing about dirt-cheap, no-name models is that you'd be very foolish to rely upon them being able to deliver the specified maximum power, at least reliably and consistently over an extended period of time. I've heard horror stories about some catching fire when pushed to do so.

        I suspect that a 500W power supply from even a half-decent "name" manufacturer is going to cost a lot more than a bottom-of-the-range, no-name 750W model, but I know which one I'd trust more to run the same machine.

        1. IvyKing

          Re: Power, and stuff.

          I've seen a number of PSUs in circa-2000 Dell computers fail after emitting the magic smoke, so the problem with PSUs was not limited to no-name models.

          I suspect there is a thermal issue that Intel glossed over: local heating of the die can slow down the logic elements, which could then lead to timing glitches causing the crash. I'm also wondering if the new process nodes are putting stricter limits on maximum junction temperatures, to prevent diffusion of the N and P dopants.

          1. MrTuK

            Re: Power, and stuff.

            Are you insinuating that Intel F3CKED UP ?

            1. IvyKing

              Re: Power, and stuff.

              Pretty much so. It sounds like they were pushing the process a bit too hard.

          2. anonymous boring coward Silver badge

            Re: Power, and stuff.

            The cheap internals of Dells etc. are probably no-name. How else would they make money?

      2. This post has been deleted by its author

    2. JoeCool Silver badge

      not to dampen a smoking rant,

      But on the PSU mismatch problem: power supplies are rated for the cumulative output across all voltage rails (3.3V, 5V, 12V). The video card is probably just using one voltage, maybe the +12v.

      So a supporting PSU needs to deliver 180 watts at 12v, not 180w total.
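
      Concretely: 180 W drawn on the +12 V rail is 180 W ÷ 12 V = 15 A, so it's the 12 V rail's amp rating, not the PSU's headline wattage, that has to cover the card.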

  4. biddibiddibiddibiddi

    Why are they running their "servers" on desktop CPUs? This write-up seems to be just an attempt to get their company name some publicity.

    1. Steven Raith

      Game devs and publishers often have racks of consumer-CPU'd systems running workstation-class boards for realistic QA testing, and some use them for hosting remote game servers etc - having high-speed single-thread performance makes a difference for those.

      You could run them on Xeons, but the games themselves aren't designed to run on massively multicore, relatively low-speed CPUs, so those aren't as well suited for it.

      Don't get me wrong, it's pretty niche so you might not be familiar with it, but it's absolutely a thing.

      Steven R

      1. MrTuK

        Now to be replaced with AMD 7950X systems, and soon with the faster single-core 9000 series. Looks like Intel has scored an own goal here - but it will all be OK with Intel's next gen, won't it!

  5. Michael Strorm Silver badge

    "working on the multiplayer dinosaur survival game Path of Titans"

    No point squandering a legitimate excuse to mention your company's product, I suppose, but I wasn't aware that "multiplayer dinosaur survival game" was even a genre...!

    How does that even work? Do you have to avoid the doomsday asteroid heading for your home in Mexico by organising a plane trip to Europe then finding enough for you and your descendants to eat under hostile environmental conditions for the next several million years?

    Having ensured your descendants' survival into the modern era, do they become the co-stars of an infamous dinosaur/human buddy cop film starring Whoopi Goldberg?

    And why would you call such a game "Path of Titans", which doesn't even begin to hint at "dinosaur survival" and sounds like the most generic, pay-to-win freemium game title ever?

    1. Richard 12 Silver badge

      Re: "working on the multiplayer dinosaur survival game Path of Titans"

      It's a surprisingly large genre.

      Or maybe not, given how many Jurassic Park films there are.

      1. Michael Strorm Silver badge

        Re: "working on the multiplayer dinosaur survival game Path of Titans"

        Ah, I'd forgotten about Jurassic Park. It seemed more obvious once you mentioned that, and I thought briefly that I'd been stupid for misinterpreting "dinosaur survival" as meaning you were trying to survive *as* a dinosaur rather than a person trying to escape from them.

        But then I checked the game's website and it turns out that, no, you *are* playing as a dinosaur and I was right in the first place.

        Weird.

        1. Andy Non Silver badge
          Coat

          Re: "working on the multiplayer dinosaur survival game Path of Titans"

          "you *are* playing as a dinosaur"

          The problem is right there: if you are a T-Rex, how are you going to manage the controller with those tiny arms?

          1. anonymous boring coward Silver badge

            Re: "working on the multiplayer dinosaur survival game Path of Titans"

            Massive arms compared to a human's. But a bit disproportionate and inconveniently located, perhaps, for tech work.

    2. Throatwarbler Mangrove Silver badge
      Joke

      Re: "working on the multiplayer dinosaur survival game Path of Titans"

      Clearly the problem is that the game is meant to work only with Meteor Lake CPUs.

      1. MrTuK

        Re: "working on the multiplayer dinosaur survival game Path of Titans"

        Nope, 12th Gen!

        1. SuperGeek

          Re: "working on the multiplayer dinosaur survival game Path of Titans"

          Whooooosh! The joke went right over your head. Dinosaurs? Meteors? The dinosaurs were made extinct by meteors? Ah, never mind. It isn't funny when you have to explain it!

  6. Wolfclaw
    Mushroom

    Intel selling broken CPUs, then trying their best to cover it up - not like they haven't done that before!

    1. Michael Strorm Silver badge

      They've done so on at least 0.9999999999832 previous occasions.

      1. Anonymous Coward
        Anonymous Coward

        Don't Divide, Intel Inside!

    2. waldo kitty
      Boffin

      Intel selling broken CPUs, then trying their best to cover it up - not like they haven't done that before!

      can we say Celeron? ;)

      1. Solviva

        There's broken and broken. Celerons were (initially) fully working CPUs identical to the PII, but where the on-slot cache had failed and thus was disabled. The result was a lesser CPU due to the lack of cache, but 100% functional, and priced accordingly.

  7. ChoHag Silver badge
    Coat

    You just can't count on Intel.

  8. Bebu Silver badge
    Windows

    When zero isn't zero I suspect.

    4096 W and 512 A.

    My guess is these limits are stored in 12- and 9-bit fields.

    Zero watts or ampères doesn't make a lot of sense here, so zero probably means no limit, or the maximum representable value + 1, i.e. (2^12 - 1) + 1 = 4096 and (2^9 - 1) + 1 = 512.
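
    That guess, sketched in Python (the 12-bit and 9-bit field widths are pure speculation, not a documented register layout):

      # Speculative: an N-bit limit field where a raw 0 wraps around to
      # mean 2^N, i.e. effectively "no limit".
      def decode_limit(raw: int, bits: int) -> int:
          return raw if raw != 0 else 1 << bits

      print(decode_limit(0, 12))  # 4096 -> the "4,096 W" figure
      print(decode_limit(0, 9))   # 512  -> the "512 A" figure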

    Appears these chips weren't given enough magic smoke, or the wrong coloured smoke, so fiddling with overclocking settings and microcode updates isn't going to fix this. Everyone knows what happens when you let the smoke out of an electronic component. :)

    I like the developers' touch of having their game pop up a dialog when running on these CPUs:

    [ Sorry mate. You went shit* with your CPU. :((. ]

    * Unsurprisingly the branding (shiteinside)® is not yet a thing despite near universal enshittification.

  9. Ilgaz

    There is another "proof"

    It looks like an alpha-version game also has problems with the exact same generations: "Once Human" from Starry Studio. Unfortunately people can't imagine a "buggy CPU", so they blame the application.

  10. MrTuK

    Intel, Intel OMG Intel !

    Being an AMD AM5 desktop user (7950X, using Linux of course) I have been following this situation with the 13th/14th Gen K/KS models with amusement, yes amusement. Intel have always pushed the envelope because they get the advertising / positive propaganda. Anyway, my 2 pence worth: Intel, you are F3CKED this time. I personally think Intel knows what is wrong with the 13th/14th Gen - potentially the glue, metaphorically speaking, that is holding them together. And unless Intel stands up and immediately replaces every 13th/14th Gen CPU that a customer asks to have replaced, I can see a class action lawsuit coming its way. The shit is about to hit the fan and it is a doozy, so god help the fan, let alone Intel!

    What is super bad news for Intel: would a user who has had a defective 13th/14th Gen CPU now want an untested-for-six-months next-gen Intel CPU rather than an AMD 7000/9000 one? Maybe they will demand their m/b and CPU be replaced with the upcoming next-gen Intel parts - or 100% of their money back, so they can purchase an AMD m/b and CPU!

    All I can say is, the quicker Intel jumps on this to resolve it, the less future hassle it is gonna get - and it's gonna need mighty deep, deep pockets to keep everyone happy!

  11. razorfishsl

    It's all nonsense; current is limited by resistance and voltage...

    My kettle is connected to the national grid, which can supply thousands of amps, but I don't need to program my kettle to not use that current...

    The reality is Intel f**ked up... they are allowing the microcode in their CPUs to pull more current than the silicon can handle.

    It's the same misinformation and piss-poor design they have continually produced.

    Like saying their CPU can run at "XYZ GHz" - it's all nonsense, because as soon as you try to hit that limit, all the other cores scale back, so maybe a limited number can run close to that speed.

    Then you actually look at what they classify as "GHz" and it's not actually that clock speed; it's multithreading that gives a similar throughput to what might be seen at that speed, when using manipulated code.
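
    If you want to sanity-check the advertised-boost-versus-all-core behaviour yourself, here's a rough sketch (assumes Linux and the third-party psutil package; the numbers come from the OS frequency driver, so treat them as indicative only):

      # Load every physical core, then read back per-core clocks to see
      # how far they sit below the advertised single-core boost figure.
      import multiprocessing
      import time

      import psutil

      def spin():
          while True:
              pass  # burn a core

      if __name__ == "__main__":
          n = psutil.cpu_count(logical=False)
          workers = [multiprocessing.Process(target=spin, daemon=True)
                     for _ in range(n)]
          for w in workers:
              w.start()
          time.sleep(5)  # let the clocks settle under load
          for i, f in enumerate(psutil.cpu_freq(percpu=True)):
              print(f"core {i}: {f.current:.0f} MHz (driver max {f.max:.0f})")
          for w in workers:
              w.terminate()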

  12. Binraider Silver badge

    Obligatory "x86's days are numbered" thought. AMD have the edge for now, but the opportunities to improve are few and far between. Transistors are more or less at the edge of the limits imposed by quantum tunnelling, and recent performance gains have mostly come from being clever in how one physically builds and arranges the transistors.

    The diminishing returns of throwing more power at the 6502, Z80, 68000, POWER, and endless other architectures eventually all prompted a rethink - and why should x86 be any different in that regard?

    The ARM architecture is probably the best known one, though we generally haven't seen the bleeding edge of manufacturing applied to it. There's probably a place for it; especially to cut datacentre power usage.

    And then there are other concepts, like ternary processors (allowing each digit - a "trit" - to be -1, 0 or 1) rather than having to expend hardware logic and effort on flagging and dealing with things as positive or negative.
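
    For the curious, balanced ternary in a few lines of Python (purely illustrative; no relation to any shipping hardware):

      # Balanced-ternary conversion: each digit ("trit") is -1, 0 or +1,
      # so negative numbers need no separate sign handling.
      def to_balanced_ternary(n: int) -> list[int]:
          trits = []
          while n:
              r = n % 3
              if r == 2:          # encode 2 as -1 with a carry
                  r = -1
              n = (n - r) // 3
              trits.append(r)
          return trits[::-1] or [0]

      print(to_balanced_ternary(7))   # [1, -1, 1] -> 9 - 3 + 1 = 7
      print(to_balanced_ternary(-2))  # [-1, 1]    -> -3 + 1 = -2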

  13. purpleduggy

    I have both a Ryzen 9 7950X and an i9-14900K. Both are rock-solid platforms in any app or game if run on stock profiles. Most of these issues are bad motherboard BIOS profiles; this generation, many of the motherboard manufacturers have had a massive drop in quality. I want to see AMD and Intel make their own motherboards, because I no longer trust the Asus/MSI/ASRock/Gigabyte motherboard cartel to make quality boards. Their BIOSes now run with extreme overclocks by default, and those profiles are extremely difficult to turn off.

    Also, Windows 11 now enforces weird power-saving kernel-level rules that make many applications react in strange ways, often boosting the first core to maximum (especially if the default overclock BIOS profile is on), which causes the app to crash. Just run Windows 10 and build a stock profile with extreme boosting disabled.

    1. Binraider Silver badge

      Fair. The plethora of options is so large it's difficult to know what to flip. I would not know who to pick for a mobo today, having been an ASUS convert for over a decade; their recent shenanigans with RMAs (see GamersNexus) have earned them the NOPE award.

      I spent considerable time plotting out which BIOS options worked well; the defaults with XMP enabled got so much wrong it was painful to see, and unstable. Memory timings, fclk, voltages - all badly off. $deity help anyone that doesn't know what they are doing.
