
Wait, what?
"...and burns just over 4 teraflops of juice..."
More than the hand-wringing over parallel computing, the mounting electricity bill is the limiting factor holding back the growth of petascale systems. The recurring joke at the SC10 supercomputing conference last week was that we cannot build exascale systems that require their own nuclear power plant to juice them up. And that …
You should know that here at The Reg, reporters/writers never actually reread (or, sometimes, spell- or grammar-check) their postings. Heaven forbid someone actually type up their posting in MSWord/OOWriter and copy/paste it into the publication's posting form, at the very least...
Points for no spelling errors at least.
Err... whyever not?
Because Greens will dump core upon hearing that a few kilograms of enriched uranium are going critical in a downtown office building?
Sod that! It sounds like a perfect market driver for those small-scale nukes on the drawing boards.
...change a bit from 0 to 1 or 1 to 0, it is necessary to charge or discharge the capacitance of the interconnect, inside the chip and/or on the board.
So improving flops/watt involves:
1. Shorter wires.
2. Thinner wires.
3. Slower clocks.
4. Lower voltages.
5. Better software.
6. RISC chips.
We are pushing the envelope on 1-4 already. Probably some wiggle room in 5. I think we have a long way to go with 6. A chip with an 8- or 16-bit instruction set should use a heck of a lot less power.
The reduced-instruction-bit route could take us all the way to a chip with a one-bit instruction set. For example: 0 = do nothing, 1 = multiply registers 1-8 by registers 9-F. If you don't need a multiply, the chip uses "almost" no power.
This implies a purpose-built super with special chips for the most needed functions instead of a general-purpose design. Even crunching numbers with GPUs is a giant step in the right direction.
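The capacitance argument above can be put in rough numbers with the standard CMOS dynamic-power relation, P ≈ α·C·V²·f (activity factor × switched capacitance × voltage squared × clock frequency). The figures below are illustrative assumptions, not measurements of any real chip; the point is just that levers 3 and 4 (slower clocks, lower voltages) compound, with voltage counting twice:

```python
def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    """Switching power (watts) spent charging/discharging interconnect
    capacitance: P = alpha * C * V^2 * f."""
    return activity * capacitance_f * voltage_v ** 2 * freq_hz

# Hypothetical baseline: 10% activity, 1 nF of switched capacitance,
# 1.0 V supply, 3 GHz clock.
baseline = dynamic_power(0.1, 1e-9, 1.0, 3e9)   # 0.3 W

# Drop the supply to 0.8 V and the clock to 2 GHz.
scaled = dynamic_power(0.1, 1e-9, 0.8, 2e9)     # 0.128 W

print(f"baseline: {baseline:.3f} W, scaled: {scaled:.3f} W "
      f"({scaled / baseline:.0%} of baseline)")
```

Because voltage enters squared, the 20% voltage cut alone shaves 36% of the switching power before the slower clock is even counted, which is why voltage scaling has historically done the heavy lifting in flops/watt.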
@captain tick tock: I don't run better on beer, but I run happier.