They've re-invented the mainframe. Congratulations, server makers.
Cisco has beefed up its C480 AI/machine learning server, adding a faster GPU interconnect and more GPU slots while losing two CPU sockets. The C480 M5 is a 4U rackmount 4-socket Xeon modular server with up to …
And then some prove that for machine learning there is almost no difference between one machine with 8 or 16 V100 SXM2 GPUs and a number of 2- or 4-GPU machines in a cluster. Efficiency difference less than 2%, while being vastly more efficient at scale. You can use any number of GPUs, adapted to the workload, and create the ML machine you need at job submission time.
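The near-flat scaling being described falls out of the ring all-reduce arithmetic used by data-parallel training: each GPU sends roughly 2*(N-1)/N times the gradient size per step, a quantity that barely grows with cluster size. A rough back-of-the-envelope sketch (the 100M-parameter model is purely an illustrative assumption, not from the article):

```python
def ring_allreduce_bytes_per_gpu(grad_bytes: float, n_gpus: int) -> float:
    """Bytes each GPU transmits in one ring all-reduce: 2*(N-1)/N * grad_bytes."""
    return 2 * (n_gpus - 1) / n_gpus * grad_bytes

# Hypothetical 100M-parameter model with fp32 gradients (~400 MB).
grad_bytes = 100e6 * 4

per_gpu_4 = ring_allreduce_bytes_per_gpu(grad_bytes, 4)    # 600 MB per step
per_gpu_16 = ring_allreduce_bytes_per_gpu(grad_bytes, 16)  # 750 MB per step

# The per-GPU traffic is bounded by 2 * grad_bytes no matter how many
# GPUs join, so per-step communication is nearly flat with cluster size.
print(per_gpu_4 / per_gpu_16)  # ≈ 0.8
```

This only counts traffic volume; in practice the gap between one big box and a cluster also depends on link bandwidth (NVLink versus the datacenter network) and on how well communication overlaps with compute, which is where the small residual efficiency difference comes from.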
Why fork out big $$ for a monolith that will have I/O problems and is tied up with one problem, when you can truly compose the machine you need? I am very wary of these big monoliths. All for agile, modular working.