Reply to post:

Bring it on, Chipzilla! Nvidia swipes back at Intel in CPU-GPU AI performance brouhaha

queynte

"The two-socket Xeon Platinum 9282 pair crunched through 10 images per second per Watt, while the V100 came in at 22 images per second per Watt, and the T4 is even more efficient at 71 images per second per Watt."

And then factor in unit costs. The situation seems pretty clear to me: Intel are further behind the deep-learning curve than they let on. I have to agree with Nvidia that Intel are shooting themselves in the foot with their publicity here. People investing in that kind of architecture will fact-check, so I guess Intel was playing a wider game by slyly trying to 'influence' non-deep-learning techs into putting faith in Intel systems. I respect Nvidia for taking advantage on the tech angle (bits / BERT / watts) in response to Intel's propaganda preying on a lack of due diligence: a seriously bad move that undermines their [Intel's] integrity with many involved, no doubt, and smells of desperation.
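For what it's worth, the quoted figures make the efficiency gap easy to quantify. A quick sketch (only the images-per-second-per-Watt numbers come from the article; the ratio calculation is mine):

```python
# Images/sec/Watt figures as quoted in the article
perf_per_watt = {
    "2x Xeon Platinum 9282": 10,
    "Nvidia V100": 22,
    "Nvidia T4": 71,
}

# Compare each part against the two-socket Xeon baseline
baseline = perf_per_watt["2x Xeon Platinum 9282"]
for name, eff in perf_per_watt.items():
    print(f"{name}: {eff} img/s/W ({eff / baseline:.1f}x the Xeon pair)")
```

That puts the T4 at roughly 7x the Xeon pair on efficiency alone, before unit cost even enters the picture.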
