Re: Gimme speed
Few things are CPU-limited that can't work better with a bit of rejigging and some parallel processing (e.g. offloading to the GPU).
But things plateaued really quickly because they hit physical boundaries.
Nothing's stopping people making a core without a consistent clock across it. It's perfectly viable, theoretically, but it would most likely mean architecture changes. Otherwise its performance on synchronous tasks would just fall back to "waiting for everything" and you'd see no speed gain.
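That "waiting for everything" fallback is basically Amdahl's law: the serial (synchronous) fraction of a workload caps the speedup no matter how many independent units you throw at it. A minimal sketch (the function name and numbers are mine, purely illustrative):

```python
def amdahl_speedup(parallel_fraction, n_units):
    """Overall speedup when only `parallel_fraction` of the work can run
    concurrently across `n_units` independent units (cores, or
    independently-clocked regions of a chip)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units)

# A half-synchronous workload tops out near 2x, even with 1000 units:
print(round(amdahl_speedup(0.5, 1000), 2))
```

So a clockless or multi-domain core only pays off if the workload itself has little synchronous work to wait on.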
Heat and chip size are limiting... you end up with a very tiny, very hot chip, which is really bad for cooling, where you actually want everything spread out and cool. It's like putting a soldering iron bit on your motherboard, basically. Just because it's small doesn't mean you can stop it destroying itself / its surroundings by blowing a fan near it.
I think we'd see much bigger gains, anyway, from things like memory that's closer to the chip, without relying on tiny local caches to keep the CPU fed (isn't DRAM being pushed so hard part of the problem with things like Rowhammer, etc. too?). If we could bring the RAM into the CPU package, along with things like persistent RAM, then you'd probably see greater performance increases: a 3GHz CPU that's always kept busy beats a 5GHz CPU that's constantly waiting on the RAM for data.
To be honest, I'm at the point where, despite as a kid looking at a 4.77MHz chip and being unable to imagine the speed of 1GHz (which then arrived only a few years later), I look at top-of-the-line chip frequencies and don't see them changing anywhere near as much in the next decade or so.
With virtualisation, parallelisation, etc., however, it won't matter much for almost any "ordinary" workload. And HPC is moving towards GPGPU, custom chips, etc. anyway. We'll see a quantum computer before we'll see a 10GHz home machine.
I think I'd rather my servers had 100 cores idling at 3GHz than anything else anyway. VM running slow? Add another half-dozen cores and some more RAM to it. Pretty much all the normal stuff (SQL, etc.) will scale just fine.
The problem there is the licensing is going to become insane unless revised (but I run Windows Server Datacenter anyway, so I don't particularly care for most things!).
It will lead to the point, though, where one server could in theory allocate 10 cores per client (via things like terminal services, etc.) and be just as fast as anything you could do locally, and at that point you might see a push towards thin clients again. Until the next fad-cycle, of course.