Power, and stuff.
Bear in mind that GPU manufacturer recommendations for what PSU to use are very, very conservative estimates, made to account for cheap shit PSUs, or billy-basic ones used by OEMs/ODMs etc.
For example, I have a 7800XT, which I'm sure recommends a >790w PSU or some such. Which is utter rot on a technical level, but it's a necessary margin to account for the fact that not everyone has a high quality PSU, or maybe they're running four spinning disks in there that'll draw knocking on 100w at startup, etc.
I'm happily running it on a 550w PSU, because the power profile at absolute max is about as follows:
CPU - if it draws more than 90w, something's gone badly wrong (Ryzen 7600, rated 65w, but give it some margin for boosting etc)
RAM/Mobo/NVMe overall - ~30w or so
GPU - 300W if it spikes badly (rated for 265w IIRC, which is about what I've seen it draw when fully loaded up and benchtesting)
Throw ~20w on there for fans etc.
That's a total of ~450w if there's a major wobble while I'm fully loading the CPU and GPU at the same time, with all the fans running full whack and the disks and network busy too - for the most part, it's gonna be closer to 300w when gaming.
So I wanged a mid-range, decent-quality (Corsair) 550w semi-modular PSU in there, and it's been just fine.
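If you want to sanity-check your own build the same way, here's a minimal back-of-the-envelope sketch in Python using the figures above - the component labels are just my shorthand, and the 80%-of-capacity comfort check is my own rule-of-thumb assumption, not anything off a spec sheet.

```python
# Back-of-the-envelope PSU budget using the worst-case figures from the post.
worst_case_draw_w = {
    "cpu (Ryzen 7600, boosting hard)": 90,
    "ram/mobo/nvme": 30,
    "gpu (7800XT transient spike)": 300,
    "fans/misc": 20,
}

psu_capacity_w = 550

total_w = sum(worst_case_draw_w.values())
headroom_w = psu_capacity_w - total_w
load_pct = 100 * total_w / psu_capacity_w

print(f"Worst-case total draw: {total_w} W")  # ~440 W
print(f"Headroom on a {psu_capacity_w} W PSU: {headroom_w} W ({load_pct:.0f}% load)")
# Rule-of-thumb assumption: staying at or under ~80% of rated capacity at worst case
# leaves room for transients and PSU ageing.
print("Comfortable" if load_pct <= 80 else "Consider a bigger PSU")
```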
With respect to the 4096W/512A limits, that's basically telling the CPU "draw whatever you think you can draw to run as you see fit" - but motherboard manufacturers will only have specced their power delivery for, say, 500w to the CPU on a serious overclocking board, and it doesn't appear to be the power delivery crapping out that's killing these CPUs.
Let's say the CPU says "I have the thermal overhead to run at 400w, so give me 400w, motherboard" and the motherboard says "tough shit, you're getting no more than 240w" - those CPUs are still dying.
That's the case with people using workstation motherboards (which have far more conservative power limits, for stability's sake). It's not that the CPUs are being blasted with power in those cases - they're still crashing even when run with sensible power limits.
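To illustrate the negotiation I'm describing (this is just a toy sketch with the example numbers from above, not Intel's actual firmware logic):

```python
# Toy model of the power-limit negotiation: the CPU can request anything up to the
# "unlimited" firmware value, but the board/BIOS delivers no more than its own cap.
UNLOCKED_LIMIT_W = 4096   # the "draw whatever you like" firmware value
WORKSTATION_CAP_W = 240   # a conservative workstation-board limit (example figure)

def delivered_power(cpu_request_w: float, board_cap_w: float) -> float:
    """The board hands over the lesser of what the CPU asks for and its own cap."""
    return min(cpu_request_w, board_cap_w)

print(delivered_power(400, WORKSTATION_CAP_W))  # -> 240: "tough, you're getting 240w"
print(delivered_power(400, UNLOCKED_LIMIT_W))   # -> 400: unlocked board hands it all over
```

The point being: even in the first, clamped case, the chips are failing.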
From what interested parties have seen, it's not specifically an overabundance of power delivery that's killing them, and it can't be fixed with microcode - so one can only assume there's a "hard stop" problem with the manufacturing process, likely dating from when they started pushing the limits of what the 12th gen architecture could do for the 13th and 14th gen. Those are refinements / very light refreshes of that architecture (more L3 cache, tuned to draw more power if it's available, etc), made to try to keep up with the AMD X3D chips, which blew everyone's socks off by drawing 105W (well, being rated for that from a cooling perspective - give it ~20% wiggle room) and kicking the >250w (often way over 300w) Intel offerings in the shins.
It's going to be very interesting to see what Gamers Nexus (actually a pretty serious benchmarking channel, rather than Capital G Gaming type content) and Level1 Techs (less hardcore, but leaning more towards enterprise with consumer stuff in the mix) come up with from their respective investigations, as this sounds like Intel have proper "done goofed".
Steven R