Cisco shoves more GPUs in AI server for deep learning, still doesn't play Crysis

Cisco has beefed up its C480 AI/machine learning server, adding a faster GPU interconnect and more GPU slots while losing two CPU sockets. The C480 M5 is a 4U rackmount 4-socket Xeon modular server with up to …

  1. stiine Silver badge
    Happy

    well, hell

    They've re-invented the mainframe. Congratulations, server makers.

  2. DJV Silver badge

    Yeah, but does it play Cry...

    ...oh wait...

  3. The Count
    Unhappy

    Oh great.... Just what I wanted

    The 1970s all over again. Ahhhh! Where are my bell bottoms?

  4. kbuggenhout

    And then some prove that, for machine learning, there is almost no difference between one machine with 8 or 16 V100 SXM2 GPU's and a number of 2- or 4-GPU machines in a cluster: the efficiency difference is less than 2%, while scaling out is a shitload more efficient. You can use any number of GPU's, adapted to the workload, and create the ML machine you need at job submission time (see the sketch after these comments).

    Why fork out big $$ for a monolith that will have IO problems and is busy with one problem, when you can truly compose the machine you need? I am very wary of these big monoliths. All for agile, modular working.

    1. Aladdin Sane
      Headmaster

      GPU's what?

  5. Korev Silver badge
    Flame

    Power/heat

    Our GPU racks are already half-empty, as the power draw and heat output are too much for racks originally designed for blade servers; I suspect servers like these would be even worse...
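For anyone wondering what kbuggenhout's "compose the machine you need at job submission time" looks like in practice, here is a minimal, illustrative sketch of multi-node data-parallel training using PyTorch's DistributedDataParallel, launched with torchrun across several small GPU boxes instead of one monolith. The model, dataset, hyperparameters and host names below are placeholders, not anything from the article; the point is only that the same script runs whether the scheduler hands it 2, 4 or 16 GPUs.

# train_ddp.py -- minimal multi-node data-parallel training sketch (illustrative only)
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def main():
    # torchrun sets RANK, WORLD_SIZE and LOCAL_RANK for every process it spawns.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and synthetic data -- swap in a real network and dataset.
    model = torch.nn.Linear(1024, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    dataset = TensorDataset(torch.randn(4096, 1024), torch.randint(0, 10, (4096,)))
    sampler = DistributedSampler(dataset)  # shards the data across all ranks
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle differently each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimiser.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # gradients are all-reduced across every GPU in the job
            optimiser.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()

Submitting the same job across, say, four 4-GPU nodes (the host name and port are hypothetical) would look roughly like: torchrun --nnodes=4 --nproc_per_node=4 --rdzv_backend=c10d --rdzv_endpoint=node0:29500 train_ddp.py. You change the node count at submission time, not the code.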
