
What's 3 teraflops good for?
I bet you the building's going to be nice and toasty in the winter.
The University of Toronto's SciNet consortium, which provides supercomputing oomph for colleges, universities, and research hospitals across Canada, will today announce that it has selected IBM's iDataPlex servers using Intel's new "Nehalem EP" Xeon 5500 processors to create the most powerful supercomputer in Canada. The …
Are the calculations not compatible with this technology? F@H (Folding@home) has shown massive increases in compute power by adding a GPGPU client to its stable, and there are examples of commodity graphics cards running CUDA-enabled GPGPU calculations in a single PC chassis.
The nVidia G80 chip contains 128 simple scalar stream ALUs at 1350 MHz, each capable of a MADD (2 flops per cycle), giving roughly 345.6 Gflops on the older 8800 GTX. The newer G92 and GT200 variants are faster still.
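For what it's worth, that 345.6 figure falls straight out of ALU count x flops per cycle x shader clock. A minimal sketch of the arithmetic, using the G80 numbers quoted above (peak_gflops is just my illustrative helper, not anything from NVIDIA's docs):

#include <stdio.h>

/* Theoretical peak (Gflops) = ALUs x flops-per-cycle x clock (GHz).
 * G80 figures from the post above: 128 scalar ALUs, a MADD counted
 * as 2 flops, 1.35 GHz shader clock. */
static double peak_gflops(int alus, int flops_per_cycle, double clock_ghz)
{
    return alus * flops_per_cycle * clock_ghz;
}

int main(void)
{
    printf("8800 GTX (G80): %.1f Gflops\n", peak_gflops(128, 2, 1.35));
    return 0;
}

This prints 345.6, matching the figure above; note it is a theoretical ceiling, not sustained throughput.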
Not all calculations are suited to GPGPU environments and their comparatively puny pools of on-board, randomly accessible memory.
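By "puny" I mean the device memory a kernel can actually address at full speed, which on cards of this era is well under a gigabyte. You can see the limit with CUDA's standard device-query call; the API is real, the printout is just illustrative:

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    /* Query device 0; totalGlobalMem is the on-board memory
     * available to kernels. */
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }
    printf("%s: %.0f MB of device memory\n",
           prop.name, prop.totalGlobalMem / (1024.0 * 1024.0));
    return 0;
}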
I'm sure the sysadmins and scientists know exactly what they're doing here, and have chosen the best fit for their intended applications. I just wanted to provide food for thought.
Well, the nVidia GPUs are very fast at some types of work when doing 32-bit floating point. They run at less than a tenth of that speed at 64-bit, and much slower still at general-purpose tasks. The Xeon is a general-purpose CPU and hence runs lots of things very well.
But yes, if you only need to do massive amounts of 32-bit floating-point work that crunches through lots of data in the same way, without much decision making, then the GPGPU choice is much more efficient. I am pretty sure that is not what they want to do.
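For the curious, the workload that does fit is the classic data-parallel kernel: every thread applies the identical arithmetic to its own element, with no per-element decision making. A minimal CUDA sketch of that pattern, using SAXPY purely as an illustration (not anything SciNet actually runs):

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* SAXPY: y = a*x + y. Every thread performs the identical MADD on
 * its own element -- the "same way, no decision making" case. */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    /* One thread per element, 256 threads per block. */
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %.1f (expect 4.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

The moment the per-element work starts branching unpredictably or chasing pointers, this model falls apart and the general-purpose Xeon wins.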
A GT200-series part with 240 shader processors at 1.4 GHz is rated at about 1 Tflops at 32-bit (240 SPs x 3 flops per cycle for the dual-issued MAD+MUL x 1.4 GHz is roughly 1 Tflops), but that might not be directly comparable to the 300 Tflops this monster is capable of, since that figure is presumably for 64-bit and more general work.