Ooooh, falling sales... ye wee poor sods!
So sod off. The prices you ask for graphics cards are completely out of touch with any reality (unless you smoke St Elon's weed; that stuff will apparently drag you to some weird places).
Remember when we said a growing supply of graphics cards was worrisome because it portended a weakening economy and not just declining interest in crypto-mining? Well, Nvidia's latest earnings are here, and they show the GPU behemoth is still dealing with falling demand for graphics chips after a boom period during the first …
More accurately, as far as GPUs are concerned, there are a lot of crazy crypto investors out there driving demand for GPU compute time.
Capitalism may be a thing, and freedom of markets too, but ultimately Ponzi schemes are illegal, so there are limits to what capitalism will allow.
Yeah, and if you ask more than the market is willing to pay, you're stuffed. That's the beauty of the thing. Though in this case we have a duopoly, and one could argue that some oversight would be appropriate. Abuse of market dominance and all...
“And, as far as GPUs are concerned, there are a lot of right crazy people out there.”
I used to get crazy coming at me all the time.
“My machine is crashing when my game is running at 200fps, it’s faulty”
“Your machine is overheating. Your monitor is 60Hz; it can’t display 200fps. Match your game frame rate to your monitor speed and it will run cooler and not crash.”
“my machine says it’s 200fps, I can’t play the game if my FPS is low, because of the latencies...I need low milliseconds!!!!!”
“For the purposes of testing, in the game settings set the maximum frame rate to 60, tick the box titled ‘Sync with refresh rate’, and let’s see if it stops the crashing”
“I’m not setting my FPS so low, I won’t be able to respond quickly enough when I’m playing against people running at 200fps!!!!”
“Just try it for 10 minutes”
“That’s not going to do anything, you’re no help” - Downvote
The crazy things people were coming out with when it came to GPUs made me give up volunteering on game community boards.
Community "technical support" boards are hilarious. They're full of people who'd get fired from a first-line support job in five minutes for incompetence, handing out stupid instructions to end users while thinking they're third-line support.
My personal favourite: I debugged a network issue that had been bedeviling a particular game's online play, simply because it was annoying me personally. The cause was the game developers writing their network code on the assumption that routers would only ever use full-cone NAT, making no allowance for partial-cone NAT. In plain English: the NAT mapping wasn't created until the game client sent a packet of data to the new peer, which it never did, because they'd assumed the new client would send packets in first. Hence the game's multiplayer simply didn't work properly for about a quarter of players.
I basically wrote up two simple fixes to sort the problem out: a guide describing how to switch home routers from partial-cone to full-cone NAT, with an informed explanation of what that meant in terms of security; and a program that sniffed the unencrypted game chat for new players joining the game, obtained their IP from the network traffic, and then sent a packet of data on each required port to the new IP to open up the ports for incoming data from that IP. And I explained how the game developer could resolve the issue themselves with a patch doing exactly that.
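For the curious, here's a minimal sketch of what that second workaround boils down to, assuming the peer's IP has already been pulled from the chat traffic by a separate sniffer. The port range and function name are illustrative assumptions, not the actual game's details:

```python
# Hedged sketch only: send one outbound datagram per game port so a
# partial-cone NAT creates a mapping that admits the peer's incoming
# traffic. GAME_PORTS is an assumed, illustrative range.
import socket

GAME_PORTS = range(27015, 27020)  # hypothetical ports used by the game

def punch_holes(peer_ip: str) -> None:
    for port in GAME_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            # Any payload will do; the outbound packet itself is what
            # makes the router open the port for this peer.
            sock.sendto(b"\x00", (peer_ip, port))

punch_holes("203.0.113.42")  # example peer address from the sniffer
```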
I was first told by the resident "experts" that this idle and insane theorising wouldn't work, and that the approved solution [that was known not to work] was to reformat the PC and do a fresh install of Windows and the game client. I pointed out that as the problem was on the router, this was pointless. That kicked off a huge argument about how unqualified I was to comment, because I didn't have 200 posts in the technical support section with my solutions accepted as answers, and so I wasn't an "expert". I assured everybody that I made no claim to be an "expert", only an IT professional who'd identified and resolved the problem.
More and more people applied one of those solutions, posting joyously that the problem was solved, to the increasing irritation of the resident expert forum posters. The game developers then took notice and applied the obvious fix I'd suggested, and with that patch the problem went away entirely.
Which, you'd think, by any standard would be a conclusive "That was the right solution, then!"
The response from the top expert forum poster was that I was pointlessly over-educated in a niche area that "nobody" needed to know. Ever since, I've found that both hilarious and instructive as to the level of quality assistance you can expect from "community" technical support boards.
“Your machine is overheating. Your monitor is 60Hz; it can’t display 200fps. Match your game frame rate to your monitor speed and it will run cooler and not crash.”
FPS issues aside, the machine shouldn't overheat. Period. If it's overheating, we have a cooling problem, not an FPS one.
At least Nvidia have brought some excitement back to the GPU sector with their 4090 series. Like the thrill of knowing your connecting cable might burst into flames and burn your house down at any moment. Or not knowing whether you'll survive the heart attack when you open up your first electricity bill.
They're also running the 16-pin much closer to the connector's maximum rated power.
The old 6-pin Mini-Fit connector used 3 live and 3 neutral pins with a max of 13A per pin (156W at 12V), so a total of 468W, yet the PCI-e spec only allows it to be used up to 75W. The 8-pin had 3 live and 4 neutral pins, so the same 468W total, but the PCI-e spec allows it up to 150W.
The 16-pin Micro-Fit has 6 live and 6 neutral (+4 sense lines) with a max of 9A per pin (108W at 12V), so a total of 648W, and the PCI-e spec allows it to be used up to 600W.
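Putting those numbers side by side (a quick back-of-envelope sketch, assuming a 12V rail throughout):

```python
# Rough headroom check for the connector figures above (12 V rail assumed).
def headroom(live_pins, amps_per_pin, pcie_limit_w, volts=12):
    max_w = live_pins * amps_per_pin * volts
    return max_w, pcie_limit_w / max_w

for name, pins, amps, limit in [
    ("6-pin Mini-Fit",   3, 13,  75),
    ("8-pin Mini-Fit",   3, 13, 150),
    ("16-pin Micro-Fit", 6,  9, 600),
]:
    max_w, used = headroom(pins, amps, limit)
    print(f"{name}: {max_w:.0f}W connector max, spec uses {used:.0%}")

# 6-pin:  468W max, spec uses 16%
# 8-pin:  468W max, spec uses 32%
# 16-pin: 648W max, spec uses 93%
```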
The connector itself is "rated" for 30 plug/unplug cycles too, which doesn't exactly inspire confidence. Granted, most people won't come close to that, but it suggests it's quite a fragile connector that they're pumping several hundred watts through. At this point, I reckon I'd be looking for a proper 3-pin mains plug before I felt confident.
Whoever designed that power connector spec needs to go back to school to learn something.
When you are wiring up proper electricity, you learn (or find out quickly) that if something is pulling a continuous load, you need to be at least 30% to 40% over spec, because wires and connectors are almost never rated for 100% load at 100% duty cycle.
You run into this in a hurry when installing electrical outlets and cabling for charging EVs.
You also then have to worry about getting enough air around the connectors and such to cool them.
Anyone with a decent background in electrical engineering or proper installation/design could have taken one look at that connector and said: 600W at 12V from that? You are out of your mind.
You need at least double the carrying capacity they used, and enough room for air to get in to cool the connectors just a bit.
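To put rough numbers on that (a sketch, using the pin ratings quoted in the earlier post and the 30-40% margin suggested here):

```python
# Back-of-envelope derating check: 600W over a 12V rail through 6 live
# pins rated 9A each (figures from the earlier post; margins assumed).
RAIL_V, LIVE_PINS, PIN_RATING_A = 12, 6, 9

per_pin_a = (600 / RAIL_V) / LIVE_PINS   # 50A total -> ~8.33A per pin
print(f"per-pin load: {per_pin_a:.2f}A of a {PIN_RATING_A}A rating "
      f"({per_pin_a / PIN_RATING_A:.0%})")

for margin in (0.30, 0.40):
    budget_a = PIN_RATING_A / (1 + margin)
    budget_w = budget_a * LIVE_PINS * RAIL_V
    print(f"{margin:.0%} headroom -> {budget_a:.2f}A per pin, "
          f"~{budget_w:.0f}W for the whole connector")

# ~8.33A of 9A leaves only ~7% headroom; a 30-40% margin would cap the
# connector at roughly 460-500W, not 600W.
```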
Had they gone with just two of the high-power connector that the Quadro A6000 and similar GPUs carry one of (fed by 2x standard 8-pin GPU cables if you don't have native ones), all would have been good for a long time.
Also this obsession with hugely tall cards and then power connectors on the top of them???
That, of course, leads to tight bends in the cables for anything but the most monstrously huge cases.
Would it have been so hard to put the connector on the back panel of the card, like EVGA was going to do?
(not on the short edge opposite the output connectors, as the cards are already insanely long).
...If you don't want to buy something you need.
I mean, really! The Nvidia A100 line is overpriced even for hyperscalers (on sale now for $0.5K per GB if you buy in bulk!), and the Quadro RTX A* line is overpriced for SMBs.
Look at the RTX A4500: about $1000 for a 20GB card that draws 200W. If you need the CUDA cores, you won't need the Tensor cores (though maybe the RT cores); and if you need the RT cores, you probably don't need the CUDA or Tensor cores, because as an SMB you have very specific goals (recession fears?) for your IT budget. My point is that GPUs are a bit too general-purpose for the price, and that any player in the GPU market needs to offer lower power consumption, specific workload functionality, a very low TCO, and a high chance of ROI. Don't forget, the gamer crowd is going to get smaller over time as the console kiddies age. Both Nvidia and AMD have been neglecting the home user/hobbyist/SOHO crowd when it comes to GPUs geared for ML rather than games.
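As a rough illustration of the TCO point, using the A4500 figures quoted above (electricity price and duty cycle here are my assumptions, not from any vendor):

```python
# Hedged TCO sketch for the quoted A4500 figures: $1000, 20GB, 200W.
# Electricity price and 24/7 duty cycle are illustrative assumptions.
PRICE_USD, VRAM_GB, POWER_W = 1000, 20, 200
USD_PER_KWH, HOURS_PER_YEAR, YEARS = 0.15, 8760, 3

power_cost = POWER_W / 1000 * HOURS_PER_YEAR * YEARS * USD_PER_KWH
print(f"${PRICE_USD / VRAM_GB:.0f}/GB up front")              # $50/GB
print(f"~${power_cost:.0f} in electricity over {YEARS} years")  # ~$788
print(f"rough {YEARS}-year TCO: ~${PRICE_USD + power_cost:.0f}")  # ~$1788
```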
The market is ready. Customers have the money right now. Team Green and Team Red are deaf and blind. What about Team Blue?
"Don't forget, the gamer crowd is going to get smaller over time as the console kiddies age. "
I don't think you understand the gaming market. The reasons for wanting a general-purpose computer over a console have not changed since it was Atari 2600 vs Sinclair Spectrum, 40 years ago.
Better quality graphics, pron, content creation, modding, not losing your purchased content each generation, FPS games, and the general anti-consumer/walled-garden behaviour of console manufacturers will ensure the PC master race's survival for some time yet.