
Supercomputer on a chip?
Gaming is of no great interest to me, but the sheer power of this generation of graphics cards is. With an expected compute performance of over 2 teraflops and the standardised DirectCompute interface, how much longer will "serious" graphics and audio software be able to get away with using only the handful of FPUs in the CPU? I can perhaps understand why the developers of such software haven't moved over to the GPU in the past, given the differing SDKs required by each vendor, but surely the advent of DirectCompute leaves them with no excuse.
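
For what it's worth, the programming model is no longer exotic either. Below is a minimal sketch of my own (not taken from any shipping product) of the DirectCompute path: a tiny HLSL compute shader compiled and dispatched through Direct3D 11, applying a simple gain to a block of audio samples on the GPU instead of the CPU's FPUs. Error handling, COM cleanup and the staging-buffer readback are omitted for brevity, and names such as kShaderSrc and kNumSamples are purely illustrative.

```cpp
// Minimal DirectCompute sketch: run a gain change over ~1M audio samples on the GPU.
// Windows / Direct3D 11; link against d3d11.lib and d3dcompiler.lib.
#include <d3d11.h>
#include <d3dcompiler.h>
#include <vector>
#include <cstring>
#include <cstdio>

// HLSL compute shader: one thread per sample, 256 threads per group.
// The gain of 0.5f stands in for whatever real DSP you would actually do.
static const char* kShaderSrc = R"(
RWStructuredBuffer<float> samples : register(u0);
[numthreads(256, 1, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    samples[id.x] *= 0.5f;
}
)";

int main()
{
    const UINT kNumSamples = 1u << 20;            // ~1M samples, divisible by 256
    std::vector<float> data(kNumSamples, 1.0f);

    // Create a hardware device at feature level 11.0, which guarantees full
    // DirectCompute (cs_5_0) support.
    ID3D11Device* dev = nullptr;
    ID3D11DeviceContext* ctx = nullptr;
    D3D_FEATURE_LEVEL fl = D3D_FEATURE_LEVEL_11_0;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 &fl, 1, D3D11_SDK_VERSION, &dev, nullptr, &ctx)))
        return 1;

    // Compile the shader source and create the compute shader object.
    ID3DBlob* blob = nullptr;
    if (FAILED(D3DCompile(kShaderSrc, strlen(kShaderSrc), nullptr, nullptr, nullptr,
                          "CSMain", "cs_5_0", 0, 0, &blob, nullptr)))
        return 1;
    ID3D11ComputeShader* cs = nullptr;
    dev->CreateComputeShader(blob->GetBufferPointer(), blob->GetBufferSize(),
                             nullptr, &cs);

    // Upload the samples into a structured buffer the shader can read and write.
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth           = kNumSamples * sizeof(float);
    bd.Usage               = D3D11_USAGE_DEFAULT;
    bd.BindFlags           = D3D11_BIND_UNORDERED_ACCESS;
    bd.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    bd.StructureByteStride = sizeof(float);
    D3D11_SUBRESOURCE_DATA init = { data.data(), 0, 0 };
    ID3D11Buffer* buf = nullptr;
    dev->CreateBuffer(&bd, &init, &buf);
    ID3D11UnorderedAccessView* uav = nullptr;
    dev->CreateUnorderedAccessView(buf, nullptr, &uav);  // null desc: view whole buffer

    // Bind and dispatch: one thread group per 256 samples.
    ctx->CSSetShader(cs, nullptr, 0);
    ctx->CSSetUnorderedAccessViews(0, 1, &uav, nullptr);
    ctx->Dispatch(kNumSamples / 256, 1, 1);

    printf("Dispatched %u samples to the GPU.\n", kNumSamples);
    // (Readback via a staging buffer and Release() of the COM objects omitted.)
    return 0;
}
```

The point of the sketch is how little vendor-specific code there is: the same HLSL and the same Direct3D 11 calls run on any DirectX 11 card, regardless of who made it.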