Thanks, Intel's Ettins.
I can hear the throbbing of the devs' heads from here...
I initially wrote "dev's heads". And it's entirely possible a dev's head will have bifurcated by the time they've sorted this out.
The mixture of performance and efficiency CPUs in Intel's 12th-gen Core processors, code-named Alder Lake, hasn't just caused problems for some Windows gamers – it's led to complications for Linux. Phoronix's Michael Larabel noticed that Release Candidate 1 of the upcoming kernel ran slower than expected on Alder Lake …
On the design level, is this hardware architecture something that Intel and other hardware vendors should handle with a kernel module, one offering knobs that can be twiddled at the compiler, interpreter, or application level, or is it something that intrinsically changes the design of the kernel? Given the infinite number of ways hardware can be optimized, I would hope it is more of the former and less of the latter.
On the design level, is this hardware architecture something that Intel and other hardware vendors should handle with a kernel module, one offering knobs that can be twiddled at the compiler, interpreter, or application level, or is it something that intrinsically changes the design of the kernel?
More the latter. This is essentially a scheduling issue. The scheduler can be made modular (indeed, in some true microkernels it isn't even a kernel service), but things in Linux are not arranged so conveniently.
With processor-specific tweaks like this, in what I assume is code that runs regularly and is therefore performance-sensitive, do people without these fancy processors see a performance drop because of the redundant code, or is there some cunning boot-time optimisation (since I don't think anyone supports hot-swapping the CPU)?