@Steven Jones, @Ken Hagan
> "Guessing" that throughput can be estimated from power output is exactly that - guessing.
Yes, and more so without info on cache, clock, architectural details, etc. It was a guess. I was trying to analyse their implication (not claim!) of how speedy it is. I'm aware of the clock/power tradeoff (though not the precise tradeoff; I need to get Hennessy & Patterson - the 20%/double-the-cores ratio is higher than I'd expected).
> Throughput per watt has increased enormously over the last few years, and as a new chip, then I'd expect this one to have gained some benefit.
Pretty much the same benefits as anything else on an equivalent process, hence my comparison with a modern quad-core - I *assumed* they were on the same process. They didn't provide info on it, and I'm aware it could be an older process. I still suspect that the wattage/heat is the most reliable indicator of speed, if I can interpret it properly...
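For what it's worth, the sort of back-of-envelope sum I had in mind is roughly the one below. Every number in it is an illustrative placeholder (core count, clock, IPC and TDP for both parts are my guesses, not anything Intel published), so it only shows the shape of the comparison, not a result:

```c
/* Back-of-envelope throughput-per-watt comparison.
 * All figures are placeholders, not vendor data. */
#include <stdio.h>

int main(void)
{
    /* Hypothetical many-core part: lots of simple, slow cores. */
    double mc_cores = 48.0, mc_clock_ghz = 1.0, mc_ipc = 1.0, mc_tdp_w = 125.0;
    /* Hypothetical modern quad-core, assumed to be on the same process. */
    double qc_cores = 4.0,  qc_clock_ghz = 3.0, qc_ipc = 2.5, qc_tdp_w = 95.0;

    /* Crude "throughput" proxy: cores x clock x IPC
       (ignores memory, cache, and everything else we don't know). */
    double mc = mc_cores * mc_clock_ghz * mc_ipc;
    double qc = qc_cores * qc_clock_ghz * qc_ipc;

    printf("many-core: %.0f units, %.2f units/W\n", mc, mc / mc_tdp_w);
    printf("quad-core: %.0f units, %.2f units/W\n", qc, qc / qc_tdp_w);
    return 0;
}
```

The point isn't the output, it's that wattage is the only hard number we were given, so it's the only lever in that sum you can actually trust.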
> especially if it supports single consistent memory models
By my reading of the article ("The cores communicate by means of a software-configurable message-passing scheme using 384KB of on-die shared memory."), it doesn't. More load on the programmer, less on the hardware, and a good thing too IMO. We need to go down that road. Also see my comment on microsecond comms overheads (I'll save you the effort: "ouch").
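To make "more load on the programmer" concrete, here's a minimal sketch of the kind of mailbox-over-shared-buffer scheme I mean. This is emphatically *not* Intel's actual API (the article doesn't describe one); it's an assumed flag-plus-copy scheme, just to show the explicit send/receive bookkeeping that lands on the software once there's no coherent shared memory between cores:

```c
/* Minimal mailbox-style message passing over a small shared buffer.
 * Illustrative only - not the chip's real programming interface. */
#include <stdatomic.h>
#include <string.h>

#define MSG_BYTES 32

struct mailbox {
    _Atomic int full;          /* 0 = empty, 1 = message waiting */
    char payload[MSG_BYTES];   /* would live in the shared on-die SRAM */
};

/* Sender: spin until the slot is free, copy the message in, then publish it.
   Caller keeps len <= MSG_BYTES. */
void mb_send(struct mailbox *mb, const void *msg, size_t len)
{
    while (atomic_load_explicit(&mb->full, memory_order_acquire))
        ;                                    /* wait for receiver to drain */
    memcpy(mb->payload, msg, len);
    atomic_store_explicit(&mb->full, 1, memory_order_release);
}

/* Receiver: spin until a message arrives, copy it out, then free the slot. */
void mb_recv(struct mailbox *mb, void *msg, size_t len)
{
    while (!atomic_load_explicit(&mb->full, memory_order_acquire))
        ;                                    /* wait for sender */
    memcpy(msg, mb->payload, len);
    atomic_store_explicit(&mb->full, 0, memory_order_release);
}
```

Every copy and every spin in there is work the hardware no longer does for you - which is exactly the tradeoff I think is worth making.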
> (the AI crap)
We're agreed.
@Ken Hagan:
> If you are willing to accept (say) half the per-core performance, the per-core power consumption drops by an order of magnitude.
This confirms how little I know about this.
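Working it through from the textbook relation (my reasoning, not anything in the article): dynamic power goes roughly as P_dyn ∝ C·V²·f, and since the usable voltage roughly tracks frequency (V ∝ f), that's P_dyn ∝ f³. Halving the clock alone gives you something like an eighth of the dynamic power, and once you add lower leakage at the reduced voltage, an order of magnitude doesn't sound unreasonable. It also makes sense of the 20%/double-the-cores ratio above: 0.8³ ≈ 0.5, so each core at 80% clock burns roughly half the power.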
> And the programming model...
I'm not disputing that, just the performance claims.
> finally proves Sun were on the right track with Niagara.
I'm damn sure that the general bulk-SMT architecture ***was*** right for the majority of computing. The sheer complexity of modern cores alone made it seem obvious to me that we were going down the wrong alley.
But I was purely trying to inject some reality into Intel's marketing.