Sure, blame the users ...
Instead of blaming users, saying something like "oh, it ain't foolproof eh? Let's redo this..." is way more productive and confidence-inspiring.
Nvidia and other GPU makers have been urged to "ensure end user safety" by the consortium that created the specification for the 12VHPWR connector used in Nvidia's GeForce RTX 4090, which has been the subject of multiple complaints of melting cables. The consortium, PCI-SIG, said it issued the recent statement to member …
Yeah, but it's also work, plus an admission of blame.
When you have major companies paying hundreds of millions in settlement fines and going before the press proudly stating "we admit no wrongdoing", you can hardly expect a standards body to admit that it fouled up when it's the others, the ones making cables to its standard, who can be blamed.
Not only are the sense pins too long (so the card thinks the connector is fully inserted when it isn't), the force required to insert it is too high AND the latching mechanism doesn't click when it latches. Micro-Fit is just a bad connector for consumers.
A 10-pin Mini-Fit connector would support a higher wattage and be easier for consumers to install.
Are you sure it would be better? The Molex page for the Mini-Fit line
https://www.molex.com/molex/products/family/minifit_power_connector_solutions
states "up to 13.0A".
Note that I am *not* a trained electrical engineer, so I'm BOUND to be flamed for this. But if you look at the specifications on the Mini-Fit system
https://www.molex.com/pdm_docs/ps/PS-45750-001-001.pdf
(page 9)
each wire-to-board circuit rating, when using 16 AWG wire, is only 8.5A max. Therefore getting 600W @ 12V (50A) at 8.5A per circuit requires six circuits, and that's at the maximum, read optimized and ideal, rating. So a 12-circuit Mini-Fit would be taken up exclusively by the 12V positive and ground circuits, never mind sensing and any other signals desired across the connector.
IMHO the entire plan is flawed. Using these micro-style pin connectors to handle 600W @ 12V is a long-term design mistake; a larger connector should have been specified regardless of the board space used. IMHO they were thinking of optimum packaging utilization to keep the board designers happy, but that leaves not a lot of headroom for overload due to a variety of factors, or even for a long-term increase in connector resistance due to surface oxidation.
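To make the circuit-count arithmetic above explicit, here's a minimal back-of-the-envelope check in Python (the 8.5A figure is the Mini-Fit 16 AWG wire-to-board rating quoted from the datasheet above; the rest is just arithmetic):

import math

# 600 W at 12 V, with each Mini-Fit wire-to-board circuit limited to 8.5 A (16 AWG)
total_amps = 600 / 12                              # 50 A total
supply_circuits = math.ceil(total_amps / 8.5)      # 6 circuits needed for +12 V
total_circuits = 2 * supply_circuits               # plus 6 more for the ground return
print(total_amps, supply_circuits, total_circuits) # 50.0 6 12

That's all 12 positions of a 12-circuit housing gone before any sense lines are even considered, which is the point being made above.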
"WARNING! Ensure the connector is fully plugged in. If it is not pushed fully home it may burst into flames when powered up."
"Well, that's what the user guide said m'Lud."
"And did the user Read The Fucking Manual?" remarked an unusually informed Court.
"No, m'Lud."
"Was the user obliged by the purchase contract to do so, and informed of this obligation before purchase?"
"No, m'Lud."
"Seems to me that this is much like my lunchbox - an open and shut case. Which reminds me, case adjourned until 2:00 PM."
If you wanted to design a power connection standard to carry more power, the sensible answer is to boost the voltage, not the current.
The USB-IF managed to get this right, so why couldn't PCI-SIG? If I can have 48V on a USB-C cable delivering 240W at 5A, why can't I have it inside my PC, delivering 600W at 12.5A? (which would only require a single 6-pin connector at most)
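To put numbers on that, it's just P = V × I (a quick sketch; the 240W and 600W figures are the ones in the comment above):

# Same power, very different current, purely P = V * I
for watts, volts in [(240, 48), (600, 48), (600, 12)]:
    print(f"{watts} W at {volts} V -> {watts / volts:.1f} A")
# 240 W at 48 V -> 5.0 A
# 600 W at 48 V -> 12.5 A
# 600 W at 12 V -> 50.0 A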
Or if we must stick with 12V @ 50A, why keep the crummy Molex multi-pin crimp connectors instead of moving to something like XT90? (probably, because PCI-SIG has a deal with Molex)
I think PCI-SIG feels the need to cover its ass here, because it knows this is its fault for being a) a dinosaur unwilling to move on from 1970s IBM standards, and b) a parasitic entity that receives millions of dollars in subscription fees from the IT industry and does no work for it. (unless cocktails and canapes on a luxury yacht count as work)
But NVidia are idiots too, of course. I'd guess their employees probably wrote the spec anyway and sent it to PCI-SIG for ratification.
You are absolutely correct. Not only would the connectors see less per-pin current, the wires would have much lower I^2*R losses (which is the reason the wires heat up). Note that the current term is squared, so any increase in current has a squared effect on the losses (i.e., heat). Internally, the video card primarily uses 3.3V logic with probably a 1.2V processing core voltage, so they have to have Point-of-Load regulators anyway. The DC/DC converters used as the POL regulators can most likely handle a higher input voltage without modification.
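A minimal sketch of that squared effect (the 0.01 ohm round-trip cable resistance here is an arbitrary illustrative number, not a spec value):

# I^2 * R: quadrupling the voltage quarters the current and cuts the
# resistive heating in the same cable by a factor of 16
r_cable = 0.01                  # ohms, illustrative round-trip resistance
for volts in (12, 48):
    amps = 600 / volts          # current needed to deliver 600 W
    print(volts, "V:", round(amps**2 * r_cable, 1), "W dissipated in the cable")
# 12 V: 25.0 W dissipated in the cable
# 48 V: 1.6 W dissipated in the cable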
The only problem is that customers then have no choice but to purchase a new power supply that would supply the higher voltage and I'm sure the marketing people didn't like that idea.
Sure, but people always forget that the transition to a higher voltage _solely_ to get away with thinner wire is going to cost more $$ than typically expected/desired, especially since an entirely different supply is needed, and even then you have to convert it back down to 12V unless you redesign the GPU to use larger, more costly components that can handle the 24V or 48V.
A better answer would be to offboard the GPU entirely to an external device or child board similar to a SAS expander or whatever. Of course the footprint would increase a little, but these cards are huge anyways :-/
This is why we shouldn't let marketing make technical decisions.
If I want to run a RTX 4090 I'd need a new PSU anyway. Also, it's not beyond the Wit of Engineers to design a 3.3V POL regulator that can accept 10V-50V wide input range and therefore keep compatibility with older PSUs (although you'd want a different connector with a series diode or two to prevent people trying to put 48V into a 12V card.. and you can always have an ID pin that handshakes to the PSU before enabling the 48V, like how USB does it). Mostly the POL design is the same, with higher voltage rated front-end caps and MOSFETs. A few $ more BoM cost for nVidia on their $1000 card..
Maybe if you calculated how much *energy* was lost in the cables and how much extra *copper* is required to run at 12V, and told them that they were therefore killing the planet, they might change their minds.. Actually a "95% ErP rating" shouldn't be given to a PSU if it burns tens of watts in the output cabling.
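For a rough sense of scale, here's a back-of-the-envelope estimate (a sketch only: the wire resistance, cable length, and contact resistance below are assumed typical values, not figures from the 12VHPWR spec):

# Rough resistive loss for 600 W over a 12VHPWR-style cable: 6 supply + 6 return
# conductors of 16 AWG (~13.2 mohm/m), 0.6 m long, ~5 mohm per crimp contact.
# All of these parameter values are assumptions for illustration only.
def cable_loss_w(power_w, voltage_v, conductors=12, length_m=0.6,
                 ohm_per_m=0.0132, contact_ohm=0.005):
    amps_per_conductor = (power_w / voltage_v) / (conductors / 2)
    wire_loss = conductors * amps_per_conductor**2 * ohm_per_m * length_m
    contact_loss = 2 * conductors * amps_per_conductor**2 * contact_ohm
    return wire_loss + contact_loss

print(round(cable_loss_w(600, 12), 1))  # ~14.9 W lost in the harness at 12 V
print(round(cable_loss_w(600, 48), 1))  # ~0.9 W at 48 V over the same cable

On those assumed numbers the 12V harness really does burn watts in the low tens, and going to 48V would cut that by a factor of 16.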
Interesting, I didn't get that from the headline at all:
"Creator of spec for melting RTX 4090 cables urges Nvidia, others to 'ensure user safety'"
I always thought Nvidia was the manufacturer? Sure, previous reports (listed in the article) stated that Nvidia blamed end users, but from this I took it that the body that came up with the spec in the first place is telling manufacturers it's their responsibility?
Now, don't get me wrong, I'm the first to criticise the americanisation of the register (lower case all intentional), the shooing away of Dabbsy and them generally getting rid of all that made this publication unique, but in this case I think the headline is fair enough.
Steve over at the Gamers Nexus YouTube channel deserves some credit for this, I think.
They did quite a thorough dive into the cause of the issue, and their video on it makes for quite interesting watching... Literally a day or two after it was posted, nVidia pointed to it and said 'yeah, that's exactly what we found too'.
Given nVidia's near silence on the subject up to that point, I was half expecting them to try and blame the user... when that's not the whole problem at all.
Crappy design decisions allow the card to be powered without the connector properly inserted, and with such a small connector and an inadequate 'clip' it was a problem waiting to happen... and it shows that quality testing was also inadequate.
It is easier to just sit and continuously press F5 until someone comes up with a solution, then quickly react with an official statement: "yeah, that is what we also found".
Testing lab resources are not cheap and one has to cheap out somewhere, but if they cheap out on pre-launch testing it can get expensive later.
"seeks damages from Nvidia for alleged fraud, among other charges."
Fraud? I haven't looked at the 4090 packaging that closely, but I don't think it says "connectors guaranteed not to melt", so I'm not sure how they committed fraud.
(Seriously, those suing NVidia do have a decent case to at least get a replacement card or something; but I think they have 0 case in claiming fraud.)