"Wacom tablets are now supported ..."
That must be an improvement to Wacom tablet support, because my Wacom Intuos tablet has been supported out of the box, with no setup needed, for a long time.
Linus Torvalds has given the world version 4.11 of the Linux kernel. “So after that extra week with an rc8, things were pretty calm,” Torvalds posted to the Linux Kernel Mailing List, adding “I'm much happier releasing a final 4.11 now. So what do we get this time around? Among other things, Linux is now better at hot- …
From the linked article at kernelnewbies:
wacom: Support 2nd-gen Intuos Pro's Bluetooth classic interface
wacom: generic: support generic touch switch, add vendor defined touch, support LEDs, add support for touchring
Incremental Wacom support. A good thing.
In perfect timing for my new Ryzen PC arriving tomorrow, 4.11 has fixed a couple of Ryzen issues: ALC1220 audio codec support (Kaby Lake isn't the only platform to benefit from this) and a fix for a CPU soft lockup in mwaitx() (Kernel Newbies actually missed this one, since it was a last-minute submission). It was really 4.9 and 4.10, though, that sorted out most of the Ryzen support.
Just waiting for ELRepo to build a kernel-ml 4.11 package and then I'm all set for running CentOS 7.3 on Ryzen...
Intel's Turbo Boost Max Technology 3.0, technology that lets a CPU figure out which of its cores is fastest and then increase its clock speed
Missed the Intel announcement on this: does this mean we can mix and match Intel CPUs in multi-CPU systems, or is Intel's multi-core CPU quality so poor that a 2 GHz quad core may actually have one (or more) cores rated at 4 GHz?
Making silicon chips is not an exact science. For an 8 core chip, due to the small defects which will always be present, some cores will be able to operate safely at higher frequencies than others on the same chip.
It's not that Intel's quality is bad - it's just that no-one has worked out how to make absolutely 100% pure silicon, slice it into wafers, and then put several billion transistors on it without making a single error.
>Making silicon chips is not an exact science.
The question is what the real degree of variation is, and whether it is sufficient to warrant the additional complexity of handling variable clock speeds. I suggest that unless the numbers are significant, it probably isn't worth doing. However, having implemented the necessary technology, Intel can now add further granularity (and thus price points) to its processor families, depending on how many cores can be 'overclocked' from the base rating given on the box, and by how much.
The question is what is the real degree of variation and is it sufficient to warrant the additional complexity of handling variable clock speeds.
My guess is there isn't a lot of added complexity, because they were already supporting varying individual core clock speeds for TurboBoost. The original purpose of TurboBoost was to improve single-threaded performance -- since the idle cores weren't producing much heat, the one doing all the work could run faster without exceeding the chip's total thermal dissipation capabilities. This sounds like an incremental addition to that scheme.
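As a rough illustration of what "per-core maximum frequency" means in practice, here is a small sketch (assuming a Linux system whose cpufreq driver exposes the standard sysfs files) that reads each core's reported maximum. On chips with Turbo Boost Max 3.0 the kernel may report different maxima per core; on older parts they are typically identical:

```python
# Sketch: list each core's maximum frequency as exposed by the standard
# Linux cpufreq sysfs interface. Availability depends on the system; the
# function returns an empty dict if the files aren't present.
import glob
import re

def per_core_max_khz():
    """Return {core_number: max_frequency_in_kHz} from sysfs."""
    freqs = {}
    pattern = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/cpuinfo_max_freq"
    for path in glob.glob(pattern):
        core = int(re.search(r"cpu(\d+)", path).group(1))
        try:
            with open(path) as f:
                freqs[core] = int(f.read().strip())
        except OSError:
            continue  # core offline or file unreadable; skip it
    return freqs

if __name__ == "__main__":
    freqs = per_core_max_khz()
    if not freqs:
        print("cpufreq sysfs not available on this system")
    for core in sorted(freqs):
        print(f"cpu{core}: {freqs[core] / 1000:.0f} MHz")
```

If the values differ across cores, the scheduler (with ITMT support, added around this kernel era) can steer single-threaded work to the fastest core.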
It's more than just the variability of the silicon. There's also the location of the core within the die (and the quality of the thermal interface, which is only partially controlled by Intel). In addition, for some tasks performance scales better with raw clock speed than with the number of active cores. And then there's the power-supply issue: not only can the system deliver only so many amps, but the chip can only distribute so much current among its units.
In the past I have written device drivers for Linux; I do not need educating about it.
But - as I wrote - I wonder if there is somewhere I can see what has recently been improved in the workings of the kernel, rather than long lists of newly-supported devices that I do not care about.