Pssst.... build your own machine learning computer, it's cheaper and even faster than using GPUs on cloud

If you’ve been thinking about building your own deep learning computer for a while but haven’t quite got 'round to it, here’s another reminder. Not only is it cheaper to do so, but the subsequent build can also be faster at training neural networks than renting GPUs on cloud platforms. When you start trying small side projects …

  1. Aladdin Sane

    But can it run Crysis?

    1. Paul Kinsler Silver badge

      But can it run Crysis?

      Run Crysis?

      This is a machine learning article, so the question should be ... "Can it *play* Crysis?" :-)

      1. Aladdin Sane

        Re: But can it run Crysis?

        Both is good.

        1. Glen 1 Silver badge

          Re: But can it run Crysis?

          It will probably evolve to use a third-party aimbot.

          Why 'git gud', when the path of least resistance (and thus 'fittest' in the AI sense) is to cheat?

  2. Steve Button

    Horses for courses

    It may be cheaper if you are running jobs 24 x 7, but if you are just dabbling you might only be running a job for a couple of weeks. Or you might even be able to scale that job up and run it in a day, with huyuuuuge amounts of power for a shorter period of time. Imagine how much it would cost you to build a similar setup in your own lab!?

    It may be fun to build your own rig, but there are very few use cases where this actually makes financial sense, especially if you factor in the cost of electricity.

    1. Steve Button

      Re: Horses for courses

      EDIT: I take that back. Those p3.2xlarge (V100) are $26k per year. Even the p2.xlarge (K80) is nearly $8k per year. If you can beat that on a $700 piece of kit then fill your boots.
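Those figures are easy to sanity-check. Here's a back-of-the-envelope sketch; the hourly on-demand rates are assumptions chosen to roughly reproduce the numbers quoted above, not confirmed pricing:

```python
# Back-of-the-envelope cloud-vs-local GPU cost comparison.
# Hourly on-demand rates below are assumptions, not quoted AWS pricing.
HOURS_PER_YEAR = 24 * 365

def annual_cost(hourly_rate: float) -> float:
    """Cost of running one on-demand instance 24x7 for a year."""
    return hourly_rate * HOURS_PER_YEAR

def breakeven_hours(build_cost: float, hourly_rate: float) -> float:
    """Hours of cloud use at which a one-off build pays for itself."""
    return build_cost / hourly_rate

p3_2xlarge = 3.06  # V100 instance, $/hr (assumed)
p2_xlarge = 0.90   # K80 instance, $/hr (assumed)

print(f"p3.2xlarge, 24x7: ${annual_cost(p3_2xlarge):,.0f}/yr")
print(f"p2.xlarge,  24x7: ${annual_cost(p2_xlarge):,.0f}/yr")
hours = breakeven_hours(700, p2_xlarge)
print(f"A $700 rig pays for itself after ~{hours:.0f} hours "
      f"(~{hours / 24:.0f} days) of p2.xlarge time")
```

At those assumed rates the V100 instance comes to roughly $26.8k/year and the K80 to roughly $7.9k/year, matching the figures above; the $700 box breaks even in about a month of continuous K80-class use.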

      1. MonkeyCee

        Re: Horses for courses

        That's pretty much the case. It's why I dabbled in crypto mining, because we use similar boards.

        It's more like using 3x $700 pieces of kit, and using less precision. If you're not using that precision in the first place, then you won't miss it. You also can't stick the 1080s in a "data centre". So make sure they have Crysis installed on them and a few LEDs for when the man comes around :D

        A 2080 Ti runs about $1,100, and you'll want a server-grade PSU if you're running it 24/7. Mainly you are trying to avoid paying the nVidia licensing tax, so it helps to have some institutional cover. But you can, broadly speaking, pay half the upfront cost and none of the ongoing cost of a V100, if FP8 is close enough for you.
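The "using less precision" point above is worth making concrete: halving the bits per value halves the memory needed just to hold the weights, which is a big part of why consumer cards can stand in for data-centre parts. A minimal sketch, with a hypothetical 1-billion-parameter model as the example:

```python
# Rough illustration of the precision trade-off: bytes per value at
# each precision, and the resulting weight-storage footprint.
BYTES_PER_VALUE = {"fp32": 4, "fp16": 2, "fp8": 1}

def model_memory_gb(n_params: float, dtype: str) -> float:
    """Approximate memory to hold the weights alone (no optimizer state,
    activations, or gradients)."""
    return n_params * BYTES_PER_VALUE[dtype] / 1024**3

n = 1e9  # hypothetical 1-billion-parameter model
for dtype in ("fp32", "fp16", "fp8"):
    print(f"{dtype}: {model_memory_gb(n, dtype):.2f} GiB of weights")
```

Each halving of precision doubles what fits in a given card's VRAM, before even counting the throughput gains from lower-precision arithmetic.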

  3. Stuart 2


    If you have an MSDN subscription, perhaps through work, then the free credit that you get allows you to run a decent machine learning rig in the cloud for several days a month.
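The "several days a month" claim checks out arithmetically. A quick sketch, where both the credit amount and the instance rate are illustrative assumptions rather than quoted MSDN or Azure pricing:

```python
# How far a monthly cloud credit stretches on a single-GPU instance.
# Both figures below are assumptions for illustration only.
def credit_hours(monthly_credit: float, hourly_rate: float) -> float:
    """Instance-hours the monthly credit buys."""
    return monthly_credit / hourly_rate

credit = 150.0  # assumed monthly credit with a subscription, $
rate = 0.90     # assumed $/hr for a K80-class single-GPU instance

hours = credit_hours(credit, rate)
print(f"~{hours:.0f} GPU-hours a month, about {hours / 24:.0f} days of 24/7 use")
```

At those assumed numbers that's roughly a week of round-the-clock GPU time per month, consistent with "several days".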

  4. Rob Davis

    and other edge AI solutions for low latency

    Check out and other edge AI solutions as well (I don't work for them nor have financial interest)

    By "edge" we mean that the AI is done locally rather than remotely in the cloud.

    Such AI on the edge has the benefit of low latency - minimal delays for sending and receiving information to be processed.

    After all, lifeforms with intelligence don't rely on a remote service.

    It also means complete control over your system; benefits include data privacy and security.
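The latency argument above can be put in numbers. A minimal sketch; all figures are illustrative assumptions, not measurements of any particular edge device or cloud service:

```python
# Edge inference avoids the network round trip entirely, so it can win
# on latency even when the local hardware is slower per inference.
def cloud_latency_ms(rtt_ms: float, inference_ms: float) -> float:
    """Network round trip plus server-side compute."""
    return rtt_ms + inference_ms

def edge_latency_ms(inference_ms: float) -> float:
    """No network hop: latency is just local compute."""
    return inference_ms

rtt = 80.0         # assumed round trip to a cloud region, ms
cloud_infer = 5.0  # assumed inference time on a fast cloud GPU, ms
edge_infer = 20.0  # assumed inference time on slower local hardware, ms

print(f"cloud: {cloud_latency_ms(rtt, cloud_infer):.0f} ms per request")
print(f"edge:  {edge_latency_ms(edge_infer):.0f} ms per request")
```

Under these assumptions the edge device answers in a quarter of the time despite being 4x slower at the inference itself, because the network round trip dominates.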

  5. Updraft102

    Nvidia's harsh EULA rule that states no giant data centers are allowed to use the cheaper 1080s

    No, it doesn't, and cannot. GTX 1080s are hardware. The EULA in question governs the driver. The courts need to severely curtail this "whatever the software maker wants" shrink-wrap contract nonsense. I'll go with the well publicized picture of Linus Torvalds nonverbally expressing his opinion of nVidia here...

    1. JLV

      the video of it has verbal components aplenty

      wonder if he’s a Johnny Cash fan ;-)

Biting the hand that feeds IT © 1998–2021