
Nvidia says Google's TPU benchmark compared wrong kit

annodomini2

Re: Google still kicks NVIDIA in terms of power...

My interpretation of the table is that the TPU isn't for training, but for run-time (inference) operations.

Once you've trained a model and developed it to run on this architecture, you can cut your ongoing operating costs by deploying on it, as the power consumption for equivalent work is much, much lower.

If the system can function adequately in 8-bit fixed point, then why not run it that way? What is the point of wasting all that power, and consequently money, on a fielded system when lower precision is adequate for the job?
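To illustrate the 8-bit point, here is a minimal sketch (in Python/NumPy, not Google's actual pipeline) of symmetric int8 quantisation. The function names and the single-scale scheme are my own assumptions for illustration, but it shows why inference at 8 bits needs a quarter of the memory traffic of float32, which is where a large part of the power saving comes from.

```python
# A minimal sketch of 8-bit fixed-point (integer) quantisation.
# The symmetric single-scale scheme here is an illustrative assumption,
# not a description of the TPU's actual numerics.
import numpy as np

def quantise_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights onto signed 8-bit integers with one scale factor."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to +/-127
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values, to check the quantisation error."""
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantise_int8(weights)
print("max error:", np.abs(dequantise(q, scale) - weights).max())
print("bytes: float32 =", weights.nbytes, ", int8 =", q.nbytes)  # 4x smaller
```

If the model still classifies correctly after this kind of rounding, every weight fetched from memory costs a quarter of the energy it did at full precision.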

Google have the resources to develop something like this, and if it either lets them increase functionality, save money, or both, then they would probably invest in it.

Yes, they could buy GPUs off the shelf, but just because something off the shelf can do the job does not necessarily mean it's the best solution.

FPGAs are great for flexibility, but they are usually not very power-efficient.

I get the impression you are looking at this from an academic/development perspective rather than a production one, which is (I assume) where Google are operating these devices.
