Sounds like good news....
For gamers and game designers etc.???
(No more coin-mining-inflated GPU prices)
Oh wait, that use is excluded. Sounds suspect....
Nvidia has banned the use of its GeForce and Titan gaming graphics cards in data centers – forcing organizations to fork out for more expensive gear, like its latest Tesla V100 chips. The chip-design giant updated its GeForce and Titan software licensing in the past few days, adding a new clause that reads: “No Datacenter …
As I understand it, Bitcoin long ago reached the point where you need dedicated ASICs to turn a mining profit, and the newer e-coins use proof-of-work algorithms that aren't GPU-friendly, meaning mining is better done on the CPU instead, specifically to prevent this kind of exploitation.
" e-coins use proof-of-work algorithms that aren't GPU-friendly"
Eh, no. More the other way around. The PoW coins are almost all set to use GPU/CPUs and not ASICs, and often have a hard fork planned if they do become mineable on an ASIC. The PoS coins are the anti-miner crowd, PoW pretty much requires miners.
While some in the crypto community don't like miners, some groups actively encourage them. It depends upon whom you wish to depend upon for your processing, and how distributed you would like it to be.
Even the ones aimed at being viable only on CPUs can still be mined on a GPU. That's the biggest appeal of the Vegas: they can crack 2k hash/s on Monero.
I'm curious where El Reg can get a 1080 Ti for those prices.
As for the article, surely the way around this is to add the word "blockchain" to any process that you're running. Or add a low-intensity mining process, so you're doing blockchain processing as well as whatever nVidia wants you to do on their super-expensive GPU.
It's the same crap as when a "workstation" GPU costs five times the equivalent consumer-grade one, despite having the same innards.
Yes, if you look up the mining profitability calculator on Nicehash, you'll see that a single Bitmain Antminer S9 could potentially(!) earn you about £600 a month, whereas the next most profitable GPU hardware (an Nvidia GTX 1080 Ti) will only yield about £160 a month.
Then again, you could probably knock together a mining rig with a 1080 Ti for less than a grand, whereas the Antminer costs three times as much. However, it earns you roughly four times the profit, so overall it makes more sense.
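To put that in perspective, here's a rough back-of-the-envelope payback comparison in Python, using only the figures quoted above (the hardware prices and monthly yields are this thread's estimates, not live market data):

```python
# Rough payback comparison using the thread's estimates above.
# Prices and yields are illustrative, not current market figures.

rigs = {
    "GTX 1080 Ti rig": {"cost_gbp": 1000, "monthly_yield_gbp": 160},
    "Antminer S9":     {"cost_gbp": 3000, "monthly_yield_gbp": 600},
}

for name, rig in rigs.items():
    payback_months = rig["cost_gbp"] / rig["monthly_yield_gbp"]
    first_year_profit = rig["monthly_yield_gbp"] * 12 - rig["cost_gbp"]
    print(f"{name}: payback in ~{payback_months:.1f} months, "
          f"~£{first_year_profit:.0f} profit in year one")
```

On those numbers the payback periods are similar (roughly five to six months either way), but the ASIC pulls well ahead after that, assuming difficulty and prices hold, which of course they won't.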
As for Nvidia's policy, IMO (IANAL) EULAs in general are legally unenforceable, and the "if Nvidia discovers them" caveat will put the kibosh on any extralegal enforcement too, so overall this policy is bollocks, if you ask me. It also does nothing for gamers, as the policy explicitly allows bitmining anyway, so GPU prices will continue to be artificially inflated.
Personally, I'm starting to think that if you can't beat 'em, join 'em, as at least that way I'll be making money rather than spending it.
Now, if someone would just kindly donate three grand to get me started...
Does that mean that I can't use one of their cards to drive my big Nagios status display boards in my server room because it is in a "data center"?
What's next? Adding specific game titles to the EULA and making them pay extra for certain titles? I can just see them now -- "Well, we never designed them to run XXXX, and we think you should be running our YYYYY product instead."
And what makes commercial blockchain mining use exempt from any other commercial use?
Is the stack of computers next to me any different than the rack mounted machines in the next room?
If I take this workstation which has an Nvidia card and 5 directly attached monitors and push it into the next room, am I going to be violating the license because it is now in a data center? Am I already violating it because I share the same room and front door of the data center?
I realize that these are a bunch of "stupid" questions, but they are in response to an equally stupid stipulation.
They build the hardware, provide a driver to access the hardware, and they really should NOT be able to tell me what I can do with it after they have accepted my money. This isn't a "Qualcomm" scenario where some patented software was burned into ROM and people were supposed to pay extra to use that part of the chip (a gray area in and of itself). In this case the driver is separate from the physical hardware product, but required in order to utilize the hardware.
It's one thing if someone buys software with the provision that it not be used for commercial purposes (aka academic licensing), but if I walk into a store and buy a card I should be able to use it for commercial purposes, and it shouldn't matter whether I am doing it at home, in my office, or in a data center somewhere.
It has been my contention that firmware should be open source. Corporations have intellectual property rights, patents, and copyright protections to protect them. The firmware should be considered part of the hardware. Nvidia should send out a warning that the consumer versions of their GPUs may not stand up to data center usage, if they believe they might not, but should definitely not be able to sue anyone who has bought them for that usage.
Certainly within the UK Nvidia cannot retrospectively remove these cards from data center use by changing the EULA at a later date. Whilst they can stop you from using subsequent driver releases after the EULA has changed, the doctrine of first sale protects anyone that already has the GPUs in place and the earlier version of the driver installed. In fact I believe there would be no legal recourse for Nvidia if someone was to continue to purchase and use the cards in a data center as long as they only used the driver release from before the EULA change.
I also believe that Nvidia could be opening themselves up to legal action if they refused to allow continued driver updates for those people that already have them in place prior to the change. Any driver update with “bug fixes” is an admission the previous version of the driver contains faults, and as such brings concepts such as “not fit for purpose” into play allowing for claims under warranty.
In the US, however, YMMV.
While I commend your maximalist interpretation of the legal situation, I'd advise against such optimistic assessment.
All it takes is the shadow of legal complications down the line for purchasing decisions to be swayed one way or the other. You may even be correct in what you write, but try preemptively convincing a purchasing guy that "it will all work out in the end" and that "that's nothing to worry about" and see what reaction you get.
All Nvidia cares about is discouraging widespread substitution and cannibalisation of its product lines. This change achieves that with the minimum of fuss or effort. Unless you seriously think they're going to devote funds, time and people chasing the odd researcher.
TL;DR: If you're small fry it doesn't affect you, but if you intend to make a living out of it, be aware of the possible dangers. And if you're a large corp, don't entertain thoughts of doing things "on the cheap".
It's already the case
http://www.nvidia.com/object/manufacturer_warranty.html
select titan/geforce etc...
Warranted Product is intended for consumer end user purposes only, and is not intended for datacenter use and/or GPU cluster commercial deployments ("Enterprise Use"). Any use of Warranted Product for Enterprise Use shall void this warranty.
This article is not about warranty, it's about copyright licensing and enforcement.
Nvidia have noticed how companies like Oracle and Microsoft rake in huge amounts of money based on software licensing: e.g.
- you can pay "x" to run this software on a 1 CPU / 1 core processor, but you must pay "8x" for permission to run this on a 2 CPU / 4 core processor
- you can pay "y" to run this for home use, but you must pay "4y" to run this in your business
- they can set whatever terms they like; they can audit you; and if you are found to be breaking the terms, the law lets them take you to court and get punitive damages. (Meanwhile the EULA says the software carries no warranty whatsoever; not even fitness for purpose.)
Now Nvidia have gotten envious. They want you to pay "z" for using their graphics cards for home gaming, but "10z" to use them for business computations.
There is a long history of copyright licensing being used in this way - e.g. you can "buy" a CD or DVD in a shop, but you don't get permission to use it in clubs or schools unless you pay more.
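To make the tiering concrete, here is a purely illustrative sketch in Python of the pricing model described above; the base price and the "8x" / "4y" multipliers are just the hypothetical figures from this comment, not any vendor's real price list:

```python
# Illustrative tiered-licensing price model, using the hypothetical
# multipliers from the comment above ("8x" for more cores, "4y" for business).

def license_price(base: float, cores: int, business_use: bool) -> float:
    price = base
    if cores >= 4:        # e.g. 2 CPUs / 4 cores -> pay 8x
        price *= 8
    if business_use:      # home use "y" vs business "4y"
        price *= 4
    return price

print(license_price(100.0, cores=1, business_use=False))  # 100.0
print(license_price(100.0, cores=4, business_use=True))   # 3200.0
```

The point being: the multipliers bear no relation to the vendor's costs, only to what each segment will tolerate, and Nvidia's "z" versus "10z" split works the same way.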
In some ways I can see Nvidia's point. For all the engineering complexity of building a high performance computation device, they get little protection except short-term patents, and only if they have done something novel. But for the relatively minor job of writing a driver to hook it to your PC, this supposedly "artistic" work gets much longer protection (90 or 110 years?) and a huge amount of leverage over how the product is used.
Since everything from a toaster to a car has software in it now, this is another reason why copyright is long overdue for an overhaul in the *consumers* favour.
Boohoohoo, Nvidia GPUs aren't artistic work.
What are you talking about? Nvidia are doing *fine*.
Couple of things. First, it's *absurd* to use copyright to enforce licensing clauses that don't pass the laugh test and that directly relate to a piece of hardware you bought: the company makes the hardware, you pay through the nose for it, and the drivers are handed out for free precisely so you can use, and therefore buy, the hardware in the first place.
Secondly, if you can't do business in the GPU space because GPUs don't make enough money (bahahahaha, again doesn't pass the laugh test), then don't make GPUs; go open a coffee chain.
Also, Nvidia are having no issues licensing decades-old patents, thank you very much.
By the way, like I said, even if what you're saying here is valid (and it isn't, let's be clear), how exactly could Nvidia possibly hope to enforce it?
More importantly, who cares what Nvidia think? They're in no position to enforce it regardless. They're not going to send the GPU police to your DC to check if you've been naughty or nice. So your card dies in half the time their overpriced gear does: I bought a $1,000 card, got two years' use out of it, by which time it was obsolete anyway, and saved myself $9,000. Oh dear, they won't cover it under warranty (which they couldn't prove regardless); oh dear, what a shame, never mind.
GeForce and TITAN GPUs were never designed for data center deployments with the complex hardware, software, and thermal requirements for 24x7 operation, where there are often multi-stack racks
Also this, right here, is bullshit. That's my problem, Nvidia, thanks.
I've been telling people for years that Nvidia's DC-approved GPUs are a complete waste of time and money. It's hilarious that people have finally cottoned on to Nvidia's price gouging (which is so extreme it'd embarrass even Apple), and even more hilarious that Nvidia think they can do anything to stop it. Stop ripping people off and people will buy your "enterprise" gear (enterprise here being a synonym for bullshit, as it usually is).
Since it's only "abuse" if graphics-grade chips are used in for-profit data center services (except for the established lucrative market of graphics cards for profitable bitcoin mining), "abuse" would seem to be a synonym for "flaw in Nvidia's business model". "Abuse" is a word which also implies a privilege had been granted with conditions imposed and accepted. Otherwise, there's no line one crosses to consider an "abuse". However, what we're seeing is a manufacturer blatantly attempting to forbid an *application* which competes with another one of their products, once that application catches their attention. Nvidia is far more clearly the abuser in my eyes.
..In a data centre.
Horrible. The Windows drivers are bad enough, the Linux ones.....
Nouveau is practically crippled by Nvidia's slowness in releasing firmware, and the proprietary drivers are truly awful: ugly boot, no Wayland, other non-standard weirdness, and I'm still not seeing any better performance.
Or is AMD still hard to acquire due to the bit-miners hogging the supply?
Your choice is Nvidia or AMD.
I've had 3 or 4 machines with AMD cards and all have suffered from the AMD drivers crashing and having to restart.
I've also had 3 or 4 machines with Nvidia cards and none have had issues with the Nvidia drivers.
I know which I prefer.
"I know which I prefer."
Horses for courses surely.
You can overclock AMD cards to the point where, if they crash, you need to reboot. Nvidia generally stops you from doing this, and their crash recovery is pretty quick (2-3 seconds).
On stock settings, it's pretty much impossible to crash either vendor's drivers. With a modded BIOS on AMD you can get quite a lot more performance (dependent on the silicon lottery), whereas you pretty much can't edit the BIOS on Nvidia, and that limits OC options.
AMD will also provide drivers based on workloads, so you can get actual blockchain drivers and settings to avoid unexpected memory-speed jumps from P1 to P2.
So it comes down to what you're planning to do with the card, how much protection from messing with the firmware you want, and what attitude you expect from the manufacturer.
Fair points, but my original post was about how, in my experience, the Nvidia drivers were more stable than their AMD equivalents without any OC or mods, given that the OP was complaining about the Nvidia drivers.
I've actually had mostly the opposite experience, then.
AMD drivers are much better than they were, and AMD (likely due to their underdog role currently) are being much more OSS friendly the last few years.
I had great experiences with AMD GPUs for several years (not exactly top-of-the-line hardware*, but usable, with no issues and no need for weird, complicated workarounds), but recently switched back for a number of reasons, only to find the same old situation still prevailing with Nvidia on Linux, while the AMD drivers have become much more native to the ecosystem.
* You're pretty much guaranteed to run into some issue if you buy the newest release of hardware (on any platform), but on OSS platforms specifically. The vendors generally couldn't give a monkey's for any platform but the biggest, and the OSS projects making up for that lack of monkeys haven't had time to get up to speed with the new device.
Because for a lot of parallel processing in academic situations, the software is written with CUDA in mind. A lot of researchers just use the resultant software for their processing (photographers don't write Photoshop in order to use it); there's a minimal sketch of that lock-in after the list below.
Until the same processes can use OpenCL and non-Nvidia cards, the researchers are stuck over a barrel and have three choices:
1) pay a FORTUNE for the server-grade Teslas
2) pay similar or a little more for reduced-capacity Quadro cards
3) use GeForce cards and risk getting sued
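As promised above, a minimal sketch of the CUDA assumption that gets baked into a lot of research code. This assumes PyTorch purely as a common example; the model and data are throwaway placeholders, not anything from the article:

```python
# Minimal sketch of hard-coded CUDA assumptions in typical research code.
# Assumes PyTorch is installed; model and data are placeholders.
import torch
import torch.nn as nn

device = torch.device("cuda")        # hard-coded: no CUDA GPU, no run
model = nn.Linear(1024, 10).to(device)
data = torch.randn(64, 1024, device=device)

with torch.no_grad():
    out = model(data)
print(out.shape)                     # torch.Size([64, 10])
```

Swapping that over to OpenCL or another backend is rarely a one-line change once custom CUDA kernels are in the mix, which is exactly the barrel the researchers are over.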
It's not 2008; this driver thing isn't really worth talking about. AMD and Nvidia have fairly solid Linux drivers that perform perfectly fine. DKMS solved most issues, and if you want open source, Nouveau is perfectly (more than, in fact) adequate these days.
Also, AMD gave up on being competitive around the launch of the R5-9, which is why I abandoned the laborious wait for an RX Vega and bought a 1080 Ti hybrid instead.
This is a fairly big thing because if you're a research group interested in using GPGPUs to accelerate your code it just became £8K more expensive to buy a server with 4 GPUs which, depending on your code, might be 5-10% slower than the cheaper box you could get before. The nouveau drivers are fine for video output but no use for using them for parallel processing.
It isn't like the GPU can detect whether it is being used in a datacenter. This is no different than a warranty on a product intended for residential use that declares it is shortened or void when used in a commercial environment. Didn't stop me from using consumer model TVs in a commercial environment, because have you seen what they charge for commercial TVs - and how they're always a few years behind the state of the art?
I'm going to buy a residential water heater for my business even though it means the warranty on the tank is shortened from 6 years to 3 years, because 1) if you make sure you replace the anode before it erodes away the tank will last a very long time and 2) the otherwise identical model for commercial use costs $500 more and has the exact same 3 year tank warranty so AFAICT there's no advantage to buying the commercial model.
All NVidia can do is refuse to support you if you have issues with cards they know you're using in a datacenter environment. Good luck trying to serve a cease and desist if they somehow found out how they were being used.
Exactly!
It's practically unenforceable and they know it.
It is, however, enough of a question mark to dissuade certain big corporates from trying this route, which is precisely the goal of this amendment. All a corporate counsel needs to do is declare a certain option a legal grey area and the whole plan is killed stone dead.
"1) if you make sure you replace the anode before it erodes away the tank will last a very long time"
I'm on year 15 with eight water heaters with 3-year warranties (five electric, three gas). I replace the anodes every other year. It's cheap insurance. Most folks would probably have had to replace all eight units a couple of times by now. So far, I've had to replace a couple of heating elements, a handful of thermostats, and one burner.
Use a good penetrating oil well in advance of starting this project. These rods are almost universally corroded into place. I usually use WD40 in three applications, about 12 hours apart.
If your water heater is installed in a location where you can't physically install a long, broom handle sized anode due to a lack of overhead clearance, they make a version that is several sections of anode material chained together. The old rod can easily be bent for removal.
I use magnesium anodes, thus the two year replacement schedule. If you need/want aluminium (or aluminium/zinc), you can probably go three or four years.
Note that starting to replace the anodes late in the water heater's life is probably a waste of time and money. This is a bit of maintenance that needs to happen right from the get-go.
This post has been deleted by its author
It isn't. It's a light machine oil dissolved in the leftpondian equivalent of white spirit (somewhere between white spirit and turps substitute, IIRR), plus, if my nose is correct, something to give it a sweet pong.
I've not tried the auto transmission fluid homebrew; none is readily available, as we tend to have pudding stirrers here in Blighty. My guess is that the acetone penetrates very well and takes some of the auto fluid down with it. That has EP properties, no? Maybe a little auto fluid does dissolve in the acetone. That would suggest that using far less than the normal grease-monkey recipe of 50/50 might be better?
I've long intended to do some experiments when I get some time.
Anyone know which type of auto fluid is "best"?
In the meantime LACO do a very good penetrating release agent - Laco Rustbuster. I particularly like the pull out dispensing tube - much more effective than smothering everything with an aerosol spray in the hope that some will hit the spot.
I actually had the anode replaced on a brand new small electric water heater I had installed as a booster heater in the kitchen for the dishwasher, because it came with an aluminum anode and I wanted magnesium. The problem with aluminum anodes is that when they degrade they expand, so you can't always get them out and even if you do they'll have left a lot of crap in the bottom of the tank (that's what most of the sediment found in a typical tank is unless you have well water)
When I had that replacement magnesium anode installed I had the plumber add some PTFE tape to the threads so it'll be easy to remove to check/replace. I plan to have the same done for the bigger water heater, except there are two anodes to replace - one of which I'll probably allow to erode in place because it would be a hassle replacing the smaller secondary one installed in the cold inlet.
"I've not tried the auto transmission fluid homebrew. None readily available as we tend to have pudding stirrers here in Blighty. My guess is that the acetone penetrates very well and takes some of the auto fluid down with it. That has EP properties, no? Maybe a little auto fluid does dissolve in the acetone. That would suggest that using far less that the normal grease monkey recipe of 50 / 50 might be better??"
The problem that I've had with making the stuff is that the two don't go into solution together very well. It desperately needs some form of emulsifier, and that's beyond my ken. Howes makes a lovely penetrant that smells faintly of cinnamon, and that's my go-to instead.
(Also note: I've been surprised, but many modern transmissions with the correct number of pedals use ATF, despite the obvious indication that the fluid is for an entirely different application.)
“I work in R&D in drug discovery in a for profit company; who knows if Nvidia think what we do is "research" or not...”
If they’re ill, they probably don’t care.
[Is this just a way to ensure the various cloud operators aren’t able to use cheaper cards & sell premium services without Nv getting what they regard as their cut?]
"Now the end user can be told what he can or cannot do with the hardware he purchased?"
In effect NVidia are informing their customers that the manufacturing tolerances are pushed beyond the edge, their gear is unreliable and unfit for purpose. Take note and adjust your purchasing decisions accordingly.
I imagine that the marketing dept. have been insisting that the driver & CUDA devs implement some kind of "datacentre" detection system to help enforce the licensing constraints too, so I'd give some consideration to moving away from CUDA while you are at it. ;)
The thing is, their gear is fit for purpose: we've been running GeForce cards for nearly a decade, and the only cards that have had issues were in desktops. The server manufacturers can build servers that cope with the heat; it's presumably simply that Nvidia would rather you bought fewer of their enterprise cards than spend the same money on more GeForce cards.
Equally, if you have 80 students being taught machine learning, you should apparently be spending £180K+ to get a DGX1 with 8 GPUs for them to share, rather than spending £140 buying 10 servers and cards so they can all get access to their own GeForce card.
"Equally, if you have 80 students being taught macine learning you should apparently be spending £180K+ to get a DGX1..."
That's pretty much what happens at school. Well, not the DGX1...
You get shown how to do stuff using a GPU as part of a general ML and inductive inference course (it's usually an optional lab). The university won't spring for a GPU of any flavor, so you get to use whatever gaming laptop the students have, or a remote session on your fun box.
I'm going to have to remind them that they can't do it in data centres now....
> Intel, AMD and Nvidia have been doing this for ever. Their consumer stuff is just enterprise with different software/laser cut features
Yeah, but did they ever say you were *legally forbidden* from using their consumer equipment for enterprise applications? (Except military and medical uses, for obvious reasons)
Hard drive manufacturers have been selling "consumer" and "enterprise" drives for years. Those in the know (i.e. who measure this stuff) have found that the "consumer" drives are just as reliable: see Backblaze research passim.
But so far, the manufacturers just recommend or support the drives for certain uses. They didn't say "our drives contain proprietary firmware, and if you use our consumer drives for business purposes you are in violation of your EULA and we will sue you"
What's next? If you buy a consumer TV, you can't install it in a B&B room? If you buy a set of cutlery, you have to sign an EULA saying that you won't use it in a restaurant?
The software companies have gotten away with this for a long time - e.g. if you buy MS Office Home & Student edition, you are not licensed to use it for business purposes. But extending this sort of nonsense to hardware in general - just because most hardware these days includes firmware of some sort - is a very slippery slope.
What if Apple were to decide that the iPhone is for home use only, and if you want to use it for work-related activity you have to buy an additional licence?
A spokesperson for Nvidia told The Register that the licensing tweak was to stop the “misuse” of GeForce and Titan chips in “demanding, large-scale enterprise environments.”
How about you go f&%k yourselves and let me do what I want with what I have paid for?
I am going to do it anyway; there's FA you can do about it!
Why is it 'abuse' anyway? And why is one type of 'abuse' a licensing issue and another ain't?
If I decided to massively overclock a card, push 1kV where it's expecting 5V, or dissolve the thing in acid, that could be construed as 'abuse' of the card, but no-one is gonna come round and try to sue me for doing so; the worst that would happen is laughter if I tried to claim a replacement under warranty.
Consumer cards aren't designed to be run at full power 24x7. OK, state that, and state that it's not covered by warranty if it breaks; that should be the beginning and the end of it.
GeForce and TITAN GPUs were never designed for data center deployments with the complex hardware, software, and thermal requirements for 24x7 operation, where there are often multi-stack racks.
That's strange. It's almost like Nvidia has forgotten that numerous enthusiasts have built systems with complex, 4-way SLI that use far more exotic methods for handling extreme thermal requirements (water, liquid nitrogen) than you'd find in a datacenter.
I can think of multiple ways to deploy servers with consumer GPUs outside of a traditional datacenter, yet minimize the increased physical and environmental risks of doing so. Nvidia's hanging this all on a nebulous "datacenter" concept is farcical.
Isn't 2018 supposed to be the year when organizations begin to move 100% of their workloads to the cloud? And also the year when machine learning explodes into widespread use? With Nvidia seeking to increase the cost of datacenter usage of their products by 4,000%, can it be both?
It's just two very large (cluster) computers.
Defining exactly what constitutes one computer isn't massively easy (think IBM mainframes at one extreme, Pi clusters at another, and a high-powered workstation rendering 3D for another). Can't do it by number of GPUs, otherwise SLI is dead. Can't do it by CPU count, because multiprocessor stuff is common in workstations. Could possibly do it by motherboard count, if you're willing to abandon blade and VM users, I suppose.
I'm seeing this more as a cash grab on VDI users, to be honest: they don't need anything like the horsepower of the Teslas but benefit a lot from a bit of 3D acceleration.
Seems strange to me that no one here noticed that this is primarily directed at forcing Microsoft, Google and Amazon to buy server parts instead of consumer.
I’m pretty sure this is an effort by NVidia to
A) sell more data center GPUs
B) give Cisco, Dell and HP a business case to continue building NVidia mezzanines for their servers
C) force companies to pay for ridiculously overpriced technologies like Grid on VMware, as opposed to simply using regular desktop drivers on Hyper-V, which is A LOT less expensive. And by a lot less, think in terms of about $100k for a small 200-user VDI environment.... just for the driver licensing.
This isn’t targeted at small companies or users. This is targeted at companies like Amazon who are “cheating” NVidia out of probably a hundred million dollars a year by using consumer grade cards.
"The spokesperson said:
GeForce and TITAN GPUs were never designed for data center deployments with the complex hardware, software, and thermal requirements for 24x7 operation"
Completely invalidated by the fact that they also said:
“No Datacenter Deployment. The SOFTWARE is not licensed for datacenter deployment, except that blockchain processing in a datacenter is permitted.”