boring old plain 5.18
These days, "boring old" sounds like a blessing.
Though I do worry about Intel's contribution. I trust they discussed it more with the Linux folk than they seem willing to do with El Reg.
Linus Torvalds has released version 5.18 of the Linux kernel. The maintainer-in-chief’s post announcing the release was typical of those he made for each of the eight release candidates: this time around he found no nasty surprises, additions were neither major nor complex, and no glitches impacted the development process. …
Who knows. If it does indeed prove controversial, Linus might evict the code from the kernel version he controls. For the time being I'm guessing (and I really don't know, because I've not read widely enough) that it's Intel's version of x64 that's a major target for Linux, and that going wherever Intel's x64 is going is important. Thoughts, anyone?
Not having looked at the code, I am guessing that it is open source - it's just that the keys you need to unlock the silicon IP are "private"
So now there is going to be a market for enabling the features on Intel CPUs? What next - the CPU will call home every so often so that mommy and daddy can check up on it and tuck it into bed?
> So now there is going to be a market for enabling the features on Intel CPUs?
Sure, the brand-new and much-vaunted "CPU-as-a-Service"...
Pay your monthly subscription or your CPU turns into a pumpkin: The writing was on the wall, it was bound to happen, Intel could not lose out on the "aaS" feeding craze.
Am of two minds about this.
It's a known fact that CPU manufacturers will artificially hobble a good CPU to fit a particular higher-demand, lower-spec SKU.
And once they do that, it's unlikely you can unlock the disabled cores/functionality, etc. (although during the days of the AMD Duron series it was possible to unlock some speed with a pencil. Look up "AMD pencil trick" for more info. Not so easy nowadays).
So now, there is a chance that the disabled bits can be reactivated again, if you pay extra. On one hand, I don't like this trend; on the other hand, at least there is some hope that eventually it may be possible to use the CPU to its fullest potential.
> it may be possible to use the CPU to its fullest potential
Yes, if you pay...
But then again, if you have the required money, why wouldn't you just buy the fastest CPU to start with, and not have to pay a monthly fee to unlock it? Chances are the total monthly fees are much higher than the one-off price for the highest-end CPU (else Intel wouldn't go through the hassle).
It might make sense for people having exceptional-yet-rare needs for increased processing power, but for most others it doesn't make any sense: If you need a powerful computer, you need it to be powerful all the time, not just one month a year.
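The one-off-purchase vs. subscription argument above comes down to a simple break-even calculation. A minimal sketch, with entirely made-up illustrative prices (not actual Intel pricing):

```python
# Back-of-envelope break-even: one-off top-SKU premium vs. a monthly
# unlock fee. All figures here are illustrative assumptions.

def months_to_break_even(upfront_premium: float, monthly_fee: float) -> float:
    """Number of months of subscription after which buying outright wins."""
    return upfront_premium / monthly_fee

# Assume the top-end SKU costs $2,000 more up front and the unlock
# subscription runs $100/month.
months = months_to_break_even(2000.0, 100.0)
print(months)  # 20.0 -- beyond ~20 months, buying outright is cheaper
```

Which is why the scheme only looks attractive for short, bursty needs, exactly as the comment says.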
You could imagine AWS (other services are available) offering features on demand. You stump up the extra fee for those features, Intel collects their cut, and you can use them. Next month, customer Y doesn't need those features, AWS offer the same system without the feature.
Okay, I'm not in IT so I might be talking nonsense, but I don't think they allocate dedicated hardware CPUs to each VM. That would more often than not be a total waste of resources. AFAIK they only allocate CPU cores, and I don't think you'd be able to unlock only (for instance) 8 out of the 56 cores on a given server CPU.
Yeah, I seem to recall that Nvidia did something similar, where they sold a nobbled GPU for less than the same GPU un-nobbled...
Let's hope Arm and smaller indie fabs will start taking advantage of RISC-based architectures, or at least keep Intel legit... wishful thinking, yes... the same Intel that for years sold silicon with an undocumented "ring 0" stashed away in the architecture...
No, they mostly don't deliberately hobble good CPUs.
What actually happens is that you have a target chip performance of (e.g.) 2.5 GHz, and then during manufacturing and chip testing you find that some don't work at that frequency, while others work beyond the design frequency. Most, though, tend to work around the initial design frequency. [Think distribution curves.]
However, not all cores on chips are equal. So typically the poorer-performing cores are disabled and those chips are sold for less, while those with all cores performing well are sold for more.
Of course, where you have chips with disabled cores, you have the option (if you know what you are doing) to run things at lower clock speeds and see if you can re-enable those poorly performing cores and get more bang for the bucks you spent.
Back in the Thunderbird days the yield was such that some chips were binned down just to get the right balance of chips to market... so you could often unlock a significant performance boost just by connecting a few sets of pads on the chip package to unlock the CPU multiplier in the BIOS.
But yes, in general binning is not hobbling chips, it's selecting the best performers as the higher SKUs.
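The binning process described above can be sketched as a toy simulation: die frequencies scatter around the design target, and parts are graded by the bin they clear. The thresholds and spread below are made-up illustrative numbers, not any real process data.

```python
# Toy model of speed-binning. Frequencies follow a normal distribution
# around the design target; each die gets the highest SKU it qualifies for.
import random

random.seed(42)            # deterministic for the sake of the example
TARGET_GHZ = 2.5           # design frequency (assumed)
SPREAD_GHZ = 0.2           # assumed process variation (standard deviation)

def grade(max_freq: float) -> str:
    """Assign a hypothetical SKU tier based on the die's max stable frequency."""
    if max_freq >= 2.7:
        return "premium SKU"
    if max_freq >= 2.4:
        return "standard SKU"
    return "value SKU (downclocked)"

dies = [random.gauss(TARGET_GHZ, SPREAD_GHZ) for _ in range(10_000)]
counts = {}
for f in dies:
    counts[grade(f)] = counts.get(grade(f), 0) + 1

# Most parts land near the design frequency; the tails become the
# premium and value SKUs -- the "distribution curve" in action.
print(counts)
```

Run it and the middle bin dominates, with the premium and value tiers taken from the tails, which is the whole point of selling the same die at several prices.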
With particular regard to Intel’s contribution, is it not like a Royal Navy’s contribution to a fleet stealthy discrete pirate operation?
Such then would certainly be something Intel would naturally be tight-lipped about, and even more so if Linux mastery is delivering stealthy discrete pirate fleet operations capabilities/facilities/utilities/materiel resulting in other party sub-prime reactionary rearguard plays/clean up operations liabilities.
That would though hardly be a boring old plain move in a Genuine Genius AIMaster Plan when it would be virtually novel revolutionary and Earth-shatteringly ground breaking ... and most commendable in the harsh light of current present results realised by competing contemporary opponents’ operating systems and elite exclusive executive administrations.
amanfromMars 1's early posts never used to get up/down votes but these days it frequently gets ±3. What has changed?
a) The travesty generator is improving (some of the +4 posts were remarkably coherent).
b) There are more people here unable to recognise a robot.
c) People know it is a robot but up/down vote anyway.
Does anyone know if the up/down votes get fed back into the algorithm?
Does anyone know if the up/down votes get fed back into the algorithm? .... Flocke Kroes
What do you think, Flocke Kroes, if things are improving?
And here is what else to fully expect from more than just Bots and Demi-Gods as Times Roll on Over into NEUKlearer HyperRadioProACTivated Spaces above and beyond the Failed Command and Corrupt Control of Pathetic Apathetic Mortals of an Estranged Primitive Being and Perverse Nature ........ I am Mad as hell :-)
PS .... Believe IT. I Kid U Not. For why ever would you not think all things can take a mega series of fundamental turns and make radically different corrections if you are neither mentally retarded nor half-witted?
And .... would up/down votes being fed into the algorithm be best suited and booted for feedback or feed-forward neural networking with AI Machines Learning/Machines Learning AI ........
The researchers linked two types of deep learning networks – feedback neural networks responsible for short-term memories, and a feed-forward network – to determine which of the relationships found are important for solving the task at hand. ..... https://www.theregister.com/2022/05/24/neuromorphic_chips_up_to_16/
Answers on a postcard please to Merlin the MegaMetaDataBase Physician on Mars via Registered Posts here on Earth.
That’s the sort of Quantum Communication Leap one has to make/take in order to be able to assist and compete and win win in a lossless environment.
There is a possible use for this that is not awful: Imagine you want your own small custom accelerator integrated with an x86 CPU. You could get a small number of CPUs made to your specification for a large amount of money, or you could have your accelerator included as standard but deactivated by default - for far less money. There were rumours that the NSA did exactly this in the past, and their secret x86 extension would be available to anyone who could guess how to switch it on, what it was for and how to use it.
On the other hand, this is the company that damaged the FPU on the 486 (486SX) so they could charge extra for an undamaged 486 (486DX) - or sell an upgrade 487 (a 486DX that would only operate if it was connected to a 486SX that was shut down). I fully expect to see CPUs sold with a free time-limited speed-up - with an expensive licence fee to extend the time limit.
Much to my embarrassment, this type of thing has been around for years on the Raspberry Pi. You need to pay extra for the MPEG2 and VC-1 hardware decoders. This is because there are parts of the world where the software patents (spit) have not yet expired.
There were rumours that the NSA did exactly this in the past and their secret x86 extension would be available to anyone who could guess how to switch it on, what it was for and how to use it. ..... Flocke Kroes
Such would have been classified as foreign party misuse in the past, whereas nowadays its use by Other SMARTR Sources, much more a crack alien hack of core NSA resources exploiting and exercising and expanding the 0day vulnerability root and something easily to be considered both an abiding present and ACTive future national, international and internetional security threat/virtual attack/Remote Advanced Trojan treat ...... although one has to admit such is, by virtue of ITs Advanced IntelAIgent Design, very difficult to believe.
However, nevertheless, to both that and those in the know, does such an easy disbelief provide an almightily overwhelming stealth for a whole host of unopposed carte blanche operations which may or may not entertain disruptive troublesome shenanigans, with such being recognised as a much prized ACTive feature.
"this is the company that damaged the FPU on the 486 (486SX) "
Mostly they sold 486s with faulty FPUs as 486SXs rather than throwing them out.
It was another form of component binning. Other makers did it with RAM and various other silicon. 4116s were one of the more infamous items that got binned this way, as were 2708 EPROMs (sold as A or B versions depending which half of the die was disabled)
Towards the end of 486 manufacture they were disabling working FPUs that operated slowly (low clock speeds compared with the integer parts of the die), but mostly 486SXs just disappeared from the supply chain, and anyone looking for a 487 was told to buy a 486DX because it was cheaper to do so (not much use if you had a soldered-in CPU, but by that stage you may as well have bought new hardware anyway)
You need to remember that back in the late 80s-early 90s, wafer yields weren't wonderful (IIRC Intel was getting ~10-15% on the initial 386/486 runs) and large wafers were still a glimmer in fab engineers' eyes. Selling something that was only "half" broken at a discount was a reasonable way of recovering what would otherwise be a dead loss
I remember. At first there was only a 486 - at a really high price. There was considerable push-back against that price. I am sure 486SX was released as a way to off-load partially defective 486s as much as to capture the market for those balking at the price. I doubt it was the original plan because the first generation motherboards had no socket for a 487. Intel went to a lot of trouble to create a market for the 487 which was cheaper than a 486DX despite being essentially the same chip. The lower price of the 487 showed that the refuseniks had a genuine reason to complain about the price. Intel got their money anyway from thinner margins on two chips instead of a thick margin on one.
Surely, as an open source kernel, the only rational response is to say "If you won't tell us what this feature does, we're not supporting it".
Just because you can see an EnableSecretFeature() function call doesn't mean that it's "open" in any useful way. I can foresee problems with people forking the kernel code to remove this if no more information is forthcoming from Intel.
> problems with people forking the kernel code
There's nothing inherently problematic about forking of code; it's rather the point! It has to be done correctly, of course, or that will cause problems. If one has an application for a kernel where obscure functionality might be a risk, then it will be worth maintaining a branch without it.
Let's not get silly.
Intel is not open source and will not publish the code in its processors.
But they will reveal and market additional features in the processors, we have to assume.
Linux will not become that one OS that is unable to use those additional features should users demand it.
I cannot see anything particularly disturbing here.
The code doesn't have to be secret. The code unlocks secret functions in a chip. Presumably once unlocked, code provided by Intel under separate cover will put the now unlocked feature to work. Additionally, there is nothing saying that Intel chips currently have secret features which may be unlocked by this code. Nor is there anything saying the code is in the CPU. The code may unlock features in future CPUs, stand alone graphics chips, or other Intel products yet to be announced.
Not sure I see an open-source problem as such here. Back in the day, all open source OS code had to support proprietary hardware as that was the only game in town, and much of that hardware had, and usually still does, come with proprietary blob drivers supplied.
Basically, Intel's innovation is to add some extra proprietary shit to their CPUs and then publish an open API for it. Well, fsck me! Fancy not publishing a closed API so nobody can interface to it! [slap forehead icon].
So far so clean.
Then again, the problem of matching chip manufacture to demand is an old one. Making more than enough higher-spec chips and using any surplus to meet low-spec demands is about the only way to build in flexibility. But do you just mark the cuckoos as lower-spec than they really are, deliberately cripple them, or what? Seems to me the Intel soft-speccing is just a new and (in theory) better way of working the business.
But how are Intel going to stop pirate authenticators running rife? Phoning home to Mummy and being checked against the sales database seems about the only way. And here the bad smells overwhelm. Worse, it is only a matter of time before the black hats pwn Mummy's address and all ur system r belong 2 us. But that is not really Linus' problem, it's yours for buying into YAIPA - Yet Another Insecure Processor Architecture.
Please explain why you would pay for a license to enable a 'secret' function? This isn't about uploading unknown blobs to the CPU (which is already done, it's called microcode), simply flipping virtual switches.
The whole point of this is to be able to sell 'upgrades' to non-secret i.e. useful features. One that immediately springs to mind is they have versions of the same chip with different postfixes, one of these indicating > 1TB (or something) RAM supported.
If you only planned to buy 256 GB then why shell out for the extra fee for a large-memory processor? Then things change a couple of years down the line. Today, you'd have to buy (at least) a new processor to support that extra couple of TB you want to install. The future alternative could be you pay a modest fee to unlock the extra RAM, works out cheaper than buying a whole new CPU.
It's no different to buying a license key to unlock features in software currently. All this will be doing is adding support for licenses to be uploaded to the processor and I suspect ability to enquire about enabled capabilities of the processor, like an extended CPUID, CPUIDaaS?
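The licence-key analogy above can be made concrete. A minimal sketch of how a "licence to unlock a CPU feature" scheme could work in principle: the vendor signs a (serial, feature) pair and the part verifies the token before flipping the switch. Everything here (the key, the feature names, the token format, the use of HMAC rather than asymmetric signatures) is hypothetical, not Intel's actual SDSi protocol.

```python
# Hypothetical feature-unlock licence flow, sketched with HMAC.
# A real scheme would use asymmetric signatures so the device never
# holds the signing key; HMAC keeps the sketch short.
import hashlib
import hmac

VENDOR_KEY = b"demo-only-secret"  # assumed placeholder, not a real key

def issue_licence(cpu_serial: str, feature: str) -> str:
    """Vendor side: sign a (serial, feature) pair into an unlock token."""
    msg = f"{cpu_serial}:{feature}".encode()
    return hmac.new(VENDOR_KEY, msg, hashlib.sha256).hexdigest()

def unlock(cpu_serial: str, feature: str, licence: str) -> bool:
    """Device side: verify the token before enabling the feature."""
    expected = issue_licence(cpu_serial, feature)
    return hmac.compare_digest(expected, licence)

tok = issue_licence("SN-0001", "large-memory")
print(unlock("SN-0001", "large-memory", tok))  # True
print(unlock("SN-0002", "large-memory", tok))  # False - token tied to one part
```

Note the token is bound to one serial number, which is exactly what would stop a licence bought for one part being replayed on another.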
I see two problems in your example:
First, I don't know of many workflows that might require a 4x increase in RAM overnight. It happens, but usually it's foreseen, planned ahead, and integrated into the company's hardware upgrade cycle. Out with the old, in with the new, bigger, more modern servers.
Second, increasing RAM by >400% would require more than flipping some switch on the processor. Like adding that additional RAM, for instance... Chances are you'd need to change the motherboard, of course the power supply, obviously the memory, so in practice it means buying a new server. Ideally with a new processor, since by that time there are certainly faster ones available (or simply because the new motherboard has a different socket).
Sorry, no, this isn't a reasonable use case. The only explanation is CPU-as-a-Service rental schemes where you don't own the CPU, but have to pay a monthly fee if you don't want it to turn all of a sudden into a 1985-vintage 80386.
"First I don't know many workflows which might require a 4x increase in RAM overnight. It happens, but usually it's foreseen, planned ahead, and integrated in the company's hardware upgrade cycle. Out with the old, in with the new, bigger, more modern servers."
It was an example. Doubling the RAM would have the same problem, depending on the original config.
"Second increasing RAM by >400% would require more than flipping some switch on the processor. Like adding that additional RAM for instance... "
Darn, I was hoping that dreaming of more RAM would instantly give me more RAM; sorry, I didn't know I needed to actually procure said RAM and slap it on the motherboard. The problem isn't about buying the RAM, it's about it working once you've bought it.
"Chances are you'd need to change the motherboard, of course the power supply, obviously the memory, so actually it means buying a new server. Ideally with a new processor, since by that time there are certainly faster ones available (or simply because the new motherboard has a different socket)."
No, you have a decent server motherboard with, ooh, 16 slots per CPU. Either add the new RAM alongside the existing RAM, or remove the older modules and replace them with modules of double or quadruple the capacity. As mentioned, the point is not to be buying new motherboards, CPUs, and PSUs for what is just a 'significant' RAM upgrade. Sure, a shiny new CPU would be great, but maybe that extra 10-20% from a more modern CPU isn't worth as much as the performance boost from the extra RAM.
Of course, if Intel hadn't artificially limited large-RAM support to a specific subset of CPUs then I wouldn't have come up with this corner case and we wouldn't be having this discussion.
Let's face it, we all get sick of devs from company X putting in new features and then having to wait until three patches after GA to get the original busted features working again!
"If it ain't broke...then don't keep ramming in new features for the hell of it 'cos you'll break the sodding thing!!"