But can it run Crysis?
Pssst.... build your own machine learning computer, it's cheaper and even faster than using GPUs on cloud
If you’ve been thinking about building your own deep learning computer for a while but haven’t quite got 'round to it, here’s another reminder. Not only is it cheaper to do so, but the resulting machine can also be faster at training neural networks than renting GPUs on cloud platforms. When you start trying small side projects …
COMMENTS
-
Tuesday 30th April 2019 11:09 GMT Steve Button
Horses for courses
It may be cheaper if you are running jobs 24x7, but if you are just dabbling it might be that you are running a job for a couple of weeks. Or you might even be able to scale that job up and run it in a day, using huyuuuuge amounts of compute for a shorter period of time. Imagine how much it would cost you to build a similar setup in your own lab!?
It may be fun to build your own rig, but there are very few use cases where this actually makes financial sense, especially if you factor in the cost of electricity.
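As a rough back-of-the-envelope sketch of that trade-off in Python (every figure below is an illustrative assumption, not a number from the article or this thread):

    # Break-even sketch: home rig vs renting cloud GPUs.
    # All figures are illustrative assumptions.
    BUILD_COST = 3000.0     # one-off cost of the rig, USD
    POWER_DRAW_KW = 0.6     # rig under full load, kW
    ELEC_PRICE = 0.15       # electricity, USD per kWh
    CLOUD_RATE = 3.0        # cloud V100-class instance, USD per GPU-hour

    def home_cost(hours):
        return BUILD_COST + hours * POWER_DRAW_KW * ELEC_PRICE

    def cloud_cost(hours):
        return hours * CLOUD_RATE

    # Break-even where home_cost(h) == cloud_cost(h):
    h = BUILD_COST / (CLOUD_RATE - POWER_DRAW_KW * ELEC_PRICE)
    print(f"Break-even after ~{h:.0f} GPU-hours")  # ~1031 hours, i.e. ~6 weeks at 24x7

Under those assumptions the rig only pays for itself after about six weeks of continuous training, which is the "dabbling" point exactly.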
-
Tuesday 30th April 2019 11:39 GMT MonkeyCee
Re: Horses for courses
That's pretty much the case. It's why I dabbled in crypto mining, because we use similar boards.
It's more a case of using 3x $700 pieces of kit and accepting less precision. If you're not using that precision in the first place, then you can't miss it. You also can't stick the 1080s in a "data centre". So make sure they have Crysis installed on them and a few LEDs for when the man comes around :D
A 2080 Ti runs about $1,100, and you'll want a server-grade PSU if you're running it 24/7. Mainly you are trying to avoid paying the nVidia licensing tax, so it helps to have some institutional cover. But you can, broadly speaking, pay half the upfront cost and none of the ongoing cost of a V100, if FP16 is close enough for you.
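For anyone wondering what "using less precision" looks like in code, here is a minimal mixed-precision training step in PyTorch. Note torch.cuda.amp arrived in later PyTorch releases (at the time of this thread you'd have reached for nVidia's apex library), and the model and tensor shapes here are made up for illustration:

    import torch
    from torch import nn

    # One mixed-precision training step (torch.cuda.amp, PyTorch 1.6+);
    # the model and shapes are placeholders for illustration.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Linear(512, 10).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

    x = torch.randn(64, 512, device=device)
    y = torch.randint(0, 10, (64,), device=device)

    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = nn.functional.cross_entropy(model(x), y)  # matmuls run in FP16
    scaler.scale(loss).backward()  # loss scaling avoids FP16 gradient underflow
    scaler.step(optimizer)
    scaler.update()

The gamer cards give you most of their speed advantage in exactly this FP16 path, which is why the precision question decides whether the V100 premium is worth it.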
-
Tuesday 30th April 2019 18:44 GMT Rob Davis
up-board.org and other edge AI solutions for low latency
Check out up-board.org and up-shop.org, and other edge AI solutions as well (I don't work for them, nor do I have a financial interest).
By "edge" we mean that the AI is done locally rather than remotely in the cloud.
Such AI on the edge has the benefit of low latency - minimal delays for sending and receiving information to be processed.
After all, lifeforms with intelligence don't rely on a remote service.
It also means complete control over your system; the benefits include data privacy and security.
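As a toy illustration of the latency argument in Python (the network and server-side figures are assumptions, picked only to show that the round-trip, not the compute, dominates a remote call):

    import time
    import numpy as np

    # Stand-in "model": a single matrix multiply, run locally on the edge device.
    W = np.random.randn(1024, 1024).astype(np.float32)
    x = np.random.randn(1024).astype(np.float32)

    start = time.perf_counter()
    _ = W @ x
    local_ms = (time.perf_counter() - start) * 1000

    NETWORK_RTT_MS = 50.0   # assumed round-trip to a cloud endpoint
    CLOUD_COMPUTE_MS = 1.0  # assumed server-side inference time

    print(f"local inference: ~{local_ms:.2f} ms")
    print(f"cloud inference: ~{NETWORK_RTT_MS + CLOUD_COMPUTE_MS:.2f} ms (round-trip dominates)")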
-
Wednesday 1st May 2019 01:31 GMT Updraft102
Nvidia's harsh EULA rule that states no giant data centers are allowed to use the cheaper 1080s
No, it doesn't, and it cannot. GTX 1080s are hardware; the EULA in question governs the driver. The courts need to severely curtail this "whatever the software maker wants" shrink-wrap contract nonsense. I'll go with the well-publicized picture of Linus Torvalds nonverbally expressing his opinion of nVidia here...