"their general-purpose Intel Xeon CPU" - er, for $50,000 I'd want more than a general purpose CPU. I'd expect at least some fricking lasers !!!!
Bring it on, Chipzilla! Nvidia swipes back at Intel in CPU-GPU AI performance brouhaha
The machine-learning-performance beef between Intel and Nvidia has stepped up a notch, with the GPU giant calling out Chipzilla for spreading misleading benchmark results. Intel is desperate to overtake Nvidia in the deep-learning stakes by claiming its 64-bit x86 chips are more capable than Nvidia's at neural-network number- …
COMMENTS
Wednesday 22nd May 2019 13:16 GMT LeoP
Fair and square
Far be it from me to defend Intel. Far as in at least a few galaxies.
But in the name of fairness, one has to make clear that Nvidia's GPUs have never undergone such scrutiny - and they would have fared rather poorly if they had: just the NVENC part (which makes up a tiny proportion of the GPU) leaks the last image of every encoded stream to any Tom, Dick and Harry who creates a new context.
Thursday 23rd May 2019 01:14 GMT eldakka
Re: Target audience?
"Who buys this stuff" is usually corporations for whom $600 would be a bottle of wine at an executive luncheon.
The Googles, Amazons, Microsofts, NSAs, defence departments, university supercomputer facilities, startups with large VC backers, and so on.
If you aren't one of them, and you want to play around with this stuff, you usually rent time from the cloud offerings of those big players.
Wednesday 22nd May 2019 15:33 GMT Gavin Jamie
Cancel your units!
"The two-socket Xeon Platinum 9282 pair crunched through 10 images per second per Watt, while the V100 came in at 22 images per second per Watt, and the T4 is even more efficient at 71 images per second per Watt."
As a watt is a joule per second, it would be much simpler to say that the Intel pair manages 10 images per joule, the V100 22 images per joule and the T4 71 images per joule.
Or, better still, each image takes 100 mJ on the Intel pair, 45 mJ on the V100 and just 14 mJ on the T4.
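A minimal Python sketch of that conversion, for anyone who wants to check the arithmetic (an illustration only, not from the article or the comment):

    # Convert the quoted throughput-per-watt figures into energy per image.
    # images/second/watt == images/joule, because 1 W = 1 J/s.
    def millijoules_per_image(images_per_second_per_watt: float) -> float:
        joules_per_image = 1.0 / images_per_second_per_watt
        return joules_per_image * 1000.0  # joules -> millijoules

    for chip, throughput in [("Xeon Platinum 9282 pair", 10), ("V100", 22), ("T4", 71)]:
        print(f"{chip}: ~{millijoules_per_image(throughput):.0f} mJ per image")
    # Prints roughly 100, 45 and 14 mJ respectively, matching the figures above.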
Wednesday 22nd May 2019 19:01 GMT queynte
"The two-socket Xeon Platinum 9282 pair crunched through 10 images per second per Watt, while the V100 came in at 22 images per second per Watt, and the T4 is even more efficient at 71 images per second per Watt."
And then factor in unit costs. The situation seems pretty clear to me: Intel are further behind the deep-learning curve than they let on. I have to agree with Nvidia that Intel are shooting themselves in the foot with their publicity here. People investing in this kind of architecture will fact-check, so I guess Intel was playing a wider game by slyly trying to 'influence' non-deep-learning techs into putting faith in Intel systems. I respect Nvidia for taking advantage on the technical angle (bits / BERT / watts) in response to Intel's propaganda preying on a lack of due diligence: a seriously bad move that undermines their [Intel's] integrity in the eyes of many involved, no doubt, and smells of desperation. The sketch below shows one way to work the unit costs in.
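A hedged Python sketch of the "factor in unit costs" point, reusing the throughput-per-watt figures quoted above; the prices are placeholders made up purely for illustration, not real list prices:

    # Energy efficiency per dollar spent: (images/s/W) / price == images per joule per dollar.
    throughput_img_per_joule = {"Xeon 9282 pair": 10, "V100": 22, "T4": 71}  # from the quote above
    prices_usd = {"Xeon 9282 pair": 50_000, "V100": 10_000, "T4": 2_500}     # hypothetical placeholders
    for chip, eff in throughput_img_per_joule.items():
        print(f"{chip}: {eff / prices_usd[chip]:.5f} images per joule per dollar")
    # Substitute real quoted prices before drawing any conclusions.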