Reply to post: Re: Google still kicks NVIDIA in terms of power...

Nvidia says Google's TPU benchmark compared wrong kit

Anonymous Coward

Re: Google still kicks NVIDIA in terms of power...

I think you've rather missed the point of the article.

It's only "twice as fast" on the inference phase because it's a single-purpose inference chip.

It is of no use for training, which is the actually hard bit of machine learning. That should be obvious to anyone with a clue, because the table is comparing INT8 TOPS against 32-bit floating-point throughput. FP32 is an order of magnitude more complex a set of operations, so you'll have to forgive the GPU for only drawing 3.3x more power when everything is going at full pelt (which it never does).
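To put the INT8-vs-FP32 point in concrete terms, here's a minimal sketch (nothing to do with Google's or Nvidia's actual kernels; the numpy toy layer and every name in it are made up for illustration): a model trained in FP32 gets its weights squashed to 8-bit integers with a scale factor, and an inference-only chip then just does integer multiply-accumulates plus one rescale at the end.

import numpy as np

def quantize_int8(t_fp32):
    # One scale factor maps the FP32 tensor onto the INT8 range [-127, 127].
    scale = np.abs(t_fp32).max() / 127.0
    return np.round(t_fp32 / scale).astype(np.int8), scale

def int8_matmul(x_q, w_q, x_scale, w_scale):
    # 8-bit multiplies, 32-bit accumulate, one FP32 rescale at the end --
    # this cheap integer arithmetic is what an inference chip's TOPS figure counts.
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)
    return acc.astype(np.float32) * (x_scale * w_scale)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 256)).astype(np.float32)    # activations
w = rng.standard_normal((256, 128)).astype(np.float32)  # weights trained in FP32

x_q, xs = quantize_int8(x)
w_q, ws = quantize_int8(w)

y_fp32 = x @ w                          # what training hardware has to do
y_int8 = int8_matmul(x_q, w_q, xs, ws)  # what an inference-only chip does

# Close enough to serve an already-trained model; the rounding error is why
# you can't push gradient updates through arithmetic like this.
print(np.max(np.abs(y_fp32 - y_int8)))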

Putting it in simple terms, Nvidia are saying the comparison is flawed because the chips do different things, and unfair because it targets their legacy kit. Admittedly they then go on to make their own comparison, but they've probably got a point. GPUs are readily available and easily targeted by ML frameworks; custom silicon is not. Given that flexibility, GPUs are going to be the superior choice for the foreseeable future, probably at least until FPGA-on-CPU packages from Intel come along with decent support in the standard frameworks.
