Six years is rather long but two years is rather short.
Long-term support for Linux kernels is about to get a lot shorter
This year's Kernel Report at the Open Source Summit in Bilbao revealed that the long-term support releases of the Linux kernel will soon not be that long at all. Referring back to the list of stable kernels, which goes back about six years, Linux Weekly News editor Jonathan Corbet said: They have come to the conclusion that there …
COMMENTS
-
Tuesday 26th September 2023 13:49 GMT Doctor Syntax
It's about the age of a lot of Debian by the time it's released!
Currently the two previous versions are still maintained as Old Stable, and the two before that have LTS, which I believe is commercial rather than community. The oldest was released in 2015. Current Stable was released in June this year, running kernel 6.1.
-
Tuesday 26th September 2023 22:01 GMT DS999
This is just the Linux kernel team's support
Nothing will stop Redhat from supporting a kernel for a decade for RHEL if they want. They are already choosing what patches to integrate so it isn't more work for them but it is less work for the kernel team who can concentrate more on the leading edge.
There aren't too many people running the official Linux kernel releases, but I'll bet those that are stay pretty well on the bleeding edge and don't want to stick with the same kernel for years.
-
-
Tuesday 26th September 2023 13:33 GMT aerogems
Like the meme
You've probably seen it. It depicts modern software development as being sort of like a Jenga tower, with one tiny little pillar at the bottom bearing the note "Project some random person from Nebraska has been thanklessly maintaining since 2003".
https://www.explainxkcd.com/wiki/images/d/d7/dependency.png
-
Wednesday 27th September 2023 13:17 GMT Snake
Re: Like the meme
It's very true. The fundamental "issue", and I've posted about this before, is that most developers want to be at the cutting edge of development, be it for the glory of acknowledgment or simply the exciting challenge of a constantly-changing project. Because there is less glory on the lower levels, from 'mundane' user application development to legacy support, those project levels constantly struggle for development hours - and this is the fundamental problem with FOSS Linux, and always will be. The big development power is in the kernel, but everything else has lagged behind for decades now; this makes Linux the powerhouse where kernel and service support is everything - serverspace - but always leaves it wanting in end userspace.
-
Thursday 28th September 2023 10:27 GMT Bebu
Re: Like the meme
《... that most developers want to be at the cutting edge of development, be it for the glory of acknowledgment or simply the exciting challenge of the constantly-changing project.》
And these same developers often have the chutzpah to call themselves software engineers when they frequently have little more common sense or focus than a kitten chasing a laser pointer.
-
Tuesday 26th September 2023 14:46 GMT adam 40
Stable not in the stable
I'd like to see the old binaries hang around for longer, and not get deleted.
I'm very much an "if it ain't broke don't fix it" type, so when I install a Linux, I get a bunch of packages to go with it and turn off updates, unless there is instability straight away.
However, sometime later I might want to install a program I forgot earlier. It would be better if the packages were still around so I could pick up these binaries. After all, they don't need "maintaining".
This also applies to embedded Linux projects, where we fix on a certain version and don't update in the field.
If the kernel is dead in 2 years, that's barely enough time for all the apps to have been built to go with it (if there are install dependencies, which there usually are).
-
Tuesday 26th September 2023 16:49 GMT Liam Proven
Re: Stable not in the stable
[Author here]
> turn off updates, unless there is instability straight away.
*Eyes widen*
That is... not a good plan.
Never get involved in a land war in Central Asia. Never go in against a Sicilian. And never turn off updates, or your system's death will be on the line.
-
Wednesday 27th September 2023 07:04 GMT LybsterRoy
Re: Stable not in the stable
-- And never turn off updates, or your system's death will be on the line. --
Only because some wonderful developers insist on using the latest whizzy feature designed to make your eyes ache and your computer run at half speed. Or, even worse, just test for the OS version and say "too old, I can't run".
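On Linux, that version-gate anti-pattern takes about twenty lines to write, which may be why it is so common. A minimal sketch using uname(2); the 5.10 cutoff is invented purely for illustration, and the better practice is to probe for the specific feature you actually need rather than gate on a version string:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;
        int major = 0, minor = 0;

        if (uname(&u) != 0) {
            perror("uname");
            return EXIT_FAILURE;
        }
        /* u.release looks like "6.1.0-13-amd64"; parse the leading digits */
        if (sscanf(u.release, "%d.%d", &major, &minor) != 2) {
            fprintf(stderr, "cannot parse kernel release '%s'\n", u.release);
            return EXIT_FAILURE;
        }
        /* The anti-pattern: refuse to run on anything older than an
           arbitrary cutoff instead of testing for the feature needed */
        if (major < 5 || (major == 5 && minor < 10)) {
            fprintf(stderr, "too old, I can't run\n");
            return EXIT_FAILURE;
        }
        printf("running on %s %s\n", u.sysname, u.release);
        return EXIT_SUCCESS;
    }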
I developed the same habit with Windows - I had much more stable and performant PCs for ages.
ps - still running W7
-
Thursday 28th September 2023 09:26 GMT Anonymous Coward
Re: Stable not in the stable
Yeah, but corporate products are not in question here....we're talking about Linux.
You shouldn't conflate your experiences with operating systems from $CORP with the general experience of Linux.
If you're still running Windows 7 and you've never noticed a virus, it's not because you've never been infected; it's because there are no decent AV products left for Windows 7, no Windows Defender updates etc etc...so you wouldn't know even if you were infected. You're probably under attack a lot more often than you think.
One of my honeypots identifies itself as Windows 7 and it gets hammered way more frequently than my other honeypots both in terms of the number of attackers and the variety of different attacks that are attempted...the only exception to this, is my Windows 2008 honeypot which is more or less the same, occasionally worse.
-
Thursday 28th September 2023 13:29 GMT Rol
Re: Stable not in the stable
My Windows 7 runs absolutely beautifully. It was a bootleg version to start with. It has never been updated and practically every application/game is bootleg.
Every snippet of internet connectivity has been wrenched out of it. It has never been online and never will.
As I guess most of the nasties that it is no doubt encumbered with have an overwhelming need to chat to a server somewhere, they sit dormant, patiently awaiting the day the police connect it to the internet and destroy whatever evidence it was they were looking for haha
-
Tuesday 3rd October 2023 08:20 GMT Anonymous Coward
Re: Stable not in the stable
Joke's on you there: the police rarely boot up seized kit. They image the drives.
In fact it's the same for any organisation that may seize equipment. They create forensic images which are indexed and analysed in a forensic software package on a completely different machine.
I've worked with forensic images in the past on behalf of a legal team; you'd be surprised how easy it is...including drives encrypted with BitLocker using TPM, because their tools can be used to extract keys from your TPM chip.
All the talk you've heard about banning cryptography / adding backdoors...it's all happening to make the use of these methods that have existed for years completely legal, rather than relying on a judge to decide whether or not it was warranted.
Historically, a prosecutor has always had to ask a Judge if they can be permitted to decrypt something, which means the judge hears out both sides before making a decision to allow it...I'd imagine it's probably possible to just decrypt a drive to bulk collect "evidence" now without asking for permission.
-
Wednesday 27th September 2023 07:12 GMT drankinatty
Re: Stable not in the stable
Well, it really depends on what function the box serves and whether it's public facing or not. I see both sides. I run Arch for servers and any box that has to have what the latest kernel provides. For a daily driver, I run openSUSE Leap on the laptop (and a mix of Ubuntu and Debian spread across several Pis and WSL installs). From the kernel standpoint, on Arch we have 6.5.5-arch1-1, released a few days ago upstream; openSUSE has 5.14.21 in its "enterprise" approach to backporting; the older Pis still have 5.10 on them, chugging away on buster. All get updates when they appear.
However, I've also had a few "back-office" boxes, the tired old workhorses from days gone by that are not public facing and just won't die. They do one or two things and do them well. Like (remember the day?) a 3.4.6 box that functions as a fax server and backup DNS/DHCP for the LAN. The kernel hasn't been updated in years, but bind/isc dhcpd, hylafax and avantfax have. Since it's not public facing, it's not worth the wipe and reinstall until the drives croak (they are RAID 1 and the databases are backed up to the recent boxes anyway). Is it ideal? No. Does it pose an undue security risk? No, not unless someone with physical access does something terrible to it - zero chance of that in my world. The functionality will die with that old box though; there will be no need for a fax server in any future box, and it's not worth migrating that to the current servers.
So I can see both sides, and it really depends on what the box does and its exposure to any threats. For anything public facing, update religiously; but if you have an old clunker humming away in some forgotten corner of the server closet that only talks over the LAN and perhaps a telephone line, the updates are not as critical.
-
Wednesday 27th September 2023 16:12 GMT Stuart Castle
Re: Stable not in the stable
Re "So I can see both sides and it really depends on what the box does and its exposure to any threats. For anything public facing, update religiously, but if you have an old clunker humming away in some forgotten corner of the server closet that only talks over the LAN and perhaps a telephone line -- the updates are not as critical"
I was going to say pretty much the same. There is no point in updating a machine that never connects to the outside world and works. Well, at least until something physically fails and you need to upgrade because you can no longer get replacement parts. We have machines like this where I work. We don't officially support Windows 7, but have a couple of Windows 7 boxes. These are isolated from the network. Although having them on the network was a nice thing, the software we need to run on them does not require a network connection. The reason we haven't upgraded? They control specialist machinery. The software used is not compatible with anything but Windows 7, and the version of the software that is compatible with Windows 10 is not compatible with our machines. Seeing as the replacement cost for each machine is in the high five figures, we opted to keep the old PCs off the network and running Windows 7 until something dies and we have to replace the machine.
But you do need to keep any machine that connects to the outside world up to date.
-
Wednesday 27th September 2023 21:53 GMT jollyboyspecial
Re: Stable not in the stable
I remember a guy running an ancient server for which updates were no longer available. He was adamant that it was secure as it wasn't public facing. It sat behind a firewall all of its own on the LAN. It had its very own DMZ. The corporate firewall did not know a route to the server. It was safe. He said.
Now this seemed quite an expensive solution to me. A hefty firewall and its attendant licensing wasn't cheap, and of course the licences were a recurring expense. I asked how much it would cost to update the software to run on a newer OS. He wasn't interested. He had his solution. It was safe. He said.
The firewall rules protected the server he said. Only clients on the LAN could access the server. He said. And that much was actually true.
However, one day all hell broke loose. Somebody who took their laptop home suffered a zero day attack. Except it wasn't apparent. Working from home on an ADSL connection, the laptop's quest to find and attack other devices on the network went unnoticed. In the office, however, at LAN speeds, the laptop's owner experienced a terrible performance hit. As did other people whose laptops and desktops got hit. The LAN switches were lit up like a Christmas tree. Some smart arse spotted this was likely malware and started to pull the power cables on switches. It wasn't until the IT manager had finished fixing laptops and desktops with some newly updated AV software the following day that he bothered to look at the server. He didn't bother looking at the server before then because it was safe. He said.
It wasn't. One of the attack vectors of the malware was a common port that was open on the firewall. The laptops and desktops were up to date as of last month. The IPS signatures on the corporate firewall were up to date, so the malware couldn't get out to the internet from there. It couldn't even have got in from the internet. But once somebody brought an infected device into the office, all bets were off. The software on that server was years out of date. The malware tore it a new one, and so on day two of dealing with the infection our hero discovered that the system was no longer accessible. It wouldn't even boot.
There's safe and secure and then there's safe and secure.
-
Tuesday 3rd October 2023 08:40 GMT Anonymous Coward
Re: Stable not in the stable
"There is no point in updating a machine that never connects to the outside world"
There is, and it's called human nature.
I work in a lab where we have no choice but to rely on older operating systems because of the hardware being used...it's the type of hardware that is fucking expensive to upgrade...but the old hardware still works and provides the same features we need as the new stuff would...so we can't justify the expense (it would be hundreds of thousands of pounds).
Anyway, these machines are air gapped and have notes slapped all around them (as well as on them) to say "NEVER PLUG INTO THE NETWORK OR CONNECT TO WIFI".
Despite this, a couple of air gapped machines have been plugged into the Internet...usually by mistake...see, the interface with the hardware to the PC is via old school PCI...but the modules that make up the full "system" of kit communicate over ethernet, and each module has a remote desktop and web UI for finer configuration...all the modules have XP Embedded on them...so the PC has to be connected to a switch (which has no uplink to any other network; it is simply an air gapped LAN).
The whole shebang sits on a trolley that can be wheeled around the lab to allow it to be used anywhere in the lab.
However...TWICE...someone has "accidentally" plugged the switch on the trolley into the internet-connected LAN...not only does this cause a massive headache with DHCP, but it almost instantly causes the Windows 7 machine on the trolley to get infected...by instantly, I mean within about an hour.
There is no pirated software on that box at all and the software on there is tightly controlled, yet it still gets infected incredibly quickly.
First time it got infected, it was because of the engineer that was sent out by the manufacturer of the equipment...like a fuckwit, he decided to plug everything into the internet in order to download new firmware...which he could have done on a laptop, then put it on a flash drive...but he was so fucking bone idle, he couldn't be arsed. This infection was easy to manage though; it was just a Trojan with a dropper, and it never managed to "drop" anything before it was caught...either way, the machine was wiped and rebuilt using a backup image that took it back to its original state...no major damage, just time lost.
Second time, it was an employee...they wanted to copy a file off the server to the machine and, again, couldn't be arsed with a USB drive...this time, it was ransomware that quickly spread across the network. Again, caught early, nothing got encrypted that couldn't be recovered, and as an added bonus, we caught it so early we were able to extract the encryption keys...once again, the machine was wiped and restored back to its previous condition.
We've no idea how the viruses got in; the actual company kit is extremely locked down and is basically sterile...nothing on those machines can change without an admin stepping in. However, people do bring their own kit in and connect it to the wifi...we have no policy against this, because all the properly updated and locked down kit is more or less protected, and we have outside businesses that hire the lab, and they tend to bring their own kit...the viruses almost certainly spread from external kit, but what gave those viruses the opportunity to do some damage was a nice unpatched, unprotected perch...which meant the virus could sit around for a bit, idly looking at the network and adjusting itself to better attack the network.
The thing with modern viruses, is they don't just need vulnerabilities to exploit, they also need time...and a nice unpatched box is a great place for a virus to kick back and spend some time adjusting itself before it attacks, while it works out which zero days it can take advantage of.
If we could continue to patch that box, we would...but unfortunately it's no longer possible. Mercifully, the kind of tests that the kit can perform are in less demand these days, so the box spends most of its life powered off...but every now and then it is needed, and we all dread it.
-
Thursday 28th September 2023 09:19 GMT Anonymous Coward
Re: Stable not in the stable
"If the kernel is dead in 2 years that's barely enough time for all the apps to have been built to go with it (if there are install dependencies, which there usually are)"
No, this is wrong. The branch of a particular version will still exist; there just won't be support for anything older than 2 years. So a branch could still be around for the same period of time, we just won't have as many older versions.
"I'm very much an 'if it ain't broke don't fix it type', so when I install a Linux, I get a bunch of packages to go with it, and turn off updates, unless there is instability straight away".
If a new version of a kernel is released, the one you fixated on is almost certainly broke...stability isn't the only consideration...security and performance need to be considered also...especially security.
It drives me fucking insane when projects fixate on a kernel or, even worse, a specific distro, because it makes the embedded device that much harder to maintain over the medium to long term...there are standards that change over time, for example SSL adding new ciphers and getting rid of old broken ones...things like this make it harder and harder to use the device over time, because whatever you're connecting to the device from is getting updated and so expects newer ciphers to be there, and will sometimes refuse to connect at all if an older cipher is no longer supported...there are two possible outcomes when this point is reached. Firstly, the embedded product is dumped and the customer is off shopping around to see what they can replace it with (so you as the embedded developer can lose business, because by the time this point is reached, you can bet your ass someone else has a similar product out there that might be cheaper). Secondly, if there is no alternative, I'm now forced to relax security on other devices on my network just to make running your device possible...which these days might be impossible if I have to adhere to a cybersecurity insurer's stupid fucking requirements.
Either way, it puts a potentially massive problem at my feet.
The only time a fixation on a kernel might be OK is if your embedded device never has to connect to, or be connected to from, another device...like a sensor with a display, an appliance like a microwave, or a kids' toy or something etc etc...
I'm not sure which country you're in, but some countries have standards in the pipeline for minimum security requirements for embedded devices (think TVs, wifi access points, routers etc etc) which will be coming into force very soon, and freezing on a specific kernel will probably work against those devices when it comes to certification.
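To make the cipher point above concrete, here is a minimal sketch of the client side of that failure mode, assuming OpenSSL 1.1.0 or later; the protocol floor and cipher string are illustrative choices, not any particular product's configuration. Once an updated client enforces a floor like this, a device frozen on TLS 1.0 with legacy ciphers can no longer complete a handshake with it:

    /* build: cc tls_floor.c -lssl -lcrypto */
    #include <openssl/ssl.h>
    #include <openssl/err.h>
    #include <stdio.h>

    int main(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        if (ctx == NULL) {
            ERR_print_errors_fp(stderr);
            return 1;
        }
        /* Updated clients keep ratcheting this floor up; an embedded
           box frozen years ago may only ever speak TLS 1.0 */
        if (SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION) != 1) {
            ERR_print_errors_fp(stderr);
            return 1;
        }
        /* Dropping legacy cipher suites has the same effect: a peer
           offering only retired ciphers has nothing left to negotiate */
        if (SSL_CTX_set_cipher_list(ctx, "HIGH:!aNULL:!MD5") != 1) {
            ERR_print_errors_fp(stderr);
            return 1;
        }
        printf("client context configured; a TLS 1.0-only peer can no "
               "longer complete the handshake\n");
        SSL_CTX_free(ctx);
        return 0;
    }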
-
Tuesday 26th September 2023 16:51 GMT Liam Proven
[Author here]
> I still suspect that your Ubuntu, RHEL, SUSE etc will still carry on doing their own LTS kernel builds anyway
Yes, that is the point of the story.
What JC can't say out loud, ISTM, is that the enterprise distros _should_ be using the longterm kernels, and contributing their additional work back upstream. But they don't: they maintain their own instead, and then they keep the fixes for subscribers only.
ISTM that he is saying that the behaviour of the enterprise vendors is not merely not helping, but actually costing the FOSS kernel devs extra work and causing burnout.
-
Wednesday 27th September 2023 08:30 GMT gerryg
Enterprise vendors
Back in the day SUSE kernels used to have specific patches applied.
But AFAIK SUSE just use the stable versions (I think there was some sort of announcement about 15 years ago), so really it's the usual suspect.
Randomly, I've just watched a doc about Gary Kildall and as a result wonder whether, if the usual suspects had played nicer, there would ever have been a demand for Linux.
-
Thursday 28th September 2023 00:41 GMT coredump
I read the comments the same way you did, though I went a little further: it seemed to me he was also at least implying (exercise for the reader, etc.) that the enterprise vendors are duplicating effort and costing themselves work cycles they wouldn't otherwise need to spend if they followed the longterm kernels somewhat more closely.
Whether that was intended or not I dunno, but I could see how it might be. In some ways it might actually be more work to cherry-pick and apply kernel patches, or at least more work to properly test and QA the results. I'm speculating of course; I have no idea how much time and effort is expended by the kernel devs or the enterprise kernel maintainers in those areas.
-
Friday 27th October 2023 04:41 GMT Claptrap314
They ARE spending cycles needlessly from our point of view. What is not being said is that there is a REASON they are doing it -- vendor lock-in.
I'll tip my hat to the new Constitution
Take a bow for the new revolution
Smile and grin at the change all around
Pick up my guitar and play
Just like yesterday
Then I'll get on my knees and pray
We don't get fooled again
(The Who)
-
Tuesday 26th September 2023 16:54 GMT keithpeter
Dosh
Quote from OA
" If there are around a couple of thousand developers working on any given release of the kernel, and about 10 percent of those are newbies for each point release, that implies that as many again are leaving the project each release, burning out and quitting, or perhaps simply being redeployed to other areas by their employers."
Can't help thinking that there is a case for strategic funding of kernel development along the lines of CERN or the WHO or similar.
As a Brit living in the Midlands, I'm seeing an out-of-control vanity construction project eat tens of £billions with ever-extending timelines and no delivery date of any kind. Strikes me a few tens of £millions a year per Western industrial country - paperclip money, basically - would fund core kernel development reasonably well. The Linux Foundation has the organisational infrastructure to deal with funding. It just needs a bursary scheme (3 to 6 years' stipend equal to an appropriate salary - yes, I know, £100k+ or so - with remote working).
Of course, Meta and Alphabet could find that down the back of the couch any day. Perhaps a discount from the various anti-trust/monopolist fines could be arranged as a quid pro quo?
-
Wednesday 27th September 2023 01:46 GMT aerogems
Re: Dosh
It would be great if all the companies that rely on some open source library or program could see their way to donating a little money to the developers. If Netgear, for example, were to give even $100/yr to the likes of OpenSSL and Linux, it'd probably go a long way. Not to mention all the other little projects. Even just $5/mo or something from larger companies would probably be huge for people like the anonymous person in my earlier meme photo post. Just pull something out of petty cash and donate it. Maybe it can even be written off and lower the tax bill a little.
-
-
Thursday 28th September 2023 09:49 GMT collinsl
Re: Dosh
Problem there is that any national government would start attaching conditions to the funding, like "You must not provide this kernel to <current enemy>" or "This funding is contingent on you building in a backdoor for us" or "You must not export this kernel patch outside this country".
It gets messy fast unfortunately.
-
-
Tuesday 26th September 2023 17:20 GMT Steve McIntyre
"If you aren't paying for it, just use Debian", says Greg K-H
You HAVE to take all of the stable/LTS releases in order to have a secure and stable system. If you attempt to cherry-pick random patches you will NOT fix all of the known, and unknown, problems, but rather you will end up with a potentially more insecure system, and one that contains known bugs. Reliance on an "enterprise" distribution to provide this for your systems is up to you, discuss it with them as to how they achieve this result as this is what you are paying for. If you aren't paying for it, just use Debian, they know what they are doing and track the stable kernels and have a larger installed base than any other Linux distro. For embedded, use Yocto, they track the stable releases, or keep your own buildroot-based system up to date with the new releases.
https://social.kernel.org/notice/AZDeSjvZ39K0vf1jKC
-
Wednesday 27th September 2023 06:49 GMT Pete 2
Backport
It is reasonable to expect outfits of any size that rely on Linux kernels and apps to be able to build their releases from source. Without that they simply aren't in control of their own products.
So a basic interview question would be to have a candidate add a fix to an old kernel. It wouldn't need to be a trick question with multiple obscure dependencies (are there any other types?). Just to demonstrate an ability to use developer tools.
-
Wednesday 27th September 2023 10:36 GMT Liam Proven
Re: Backport
[Author here]
> So a basic interview question would be to have a candidate add a fix to an old kernel.
Um. I am a bit at a loss here. I think you *DRAMATICALLY* underestimate the complexity of the task here.
You are proposing changing maybe a few hundred lines of code, spread over various places, in a project that, as of the latest complete version, has THIRTY SIX MILLION lines of code in it.
Citation as an example:
https://hackaday.com/2023/08/13/linux-kernel-from-first-principles/
How would your interviewee _find_ where to make the changes unless they had spent -- conservatively -- a few *years* learning their way around the world's largest FOSS project first?
How do you or they test their change? Compilation still takes time. On an *extremely* high end box it's under a minute:
https://openbenchmarking.org/test/pts/build-linux-kernel-1.15.0
On a typical laptop of someone hunting for a job, we're talking many minutes. I don't know how long job interviews take where you are but for me they are an hour or so max.
Consider that you have to boot the resulting kernel and do some trivial tests, or the exercise is pointless, and that's your interview gone in one.
I think you don't understand what you are proposing here. This is one of the most complex and difficult jobs in 21st century computer programming, something I would expect only a few thousand humans in the world are able to do, and where backporting a small patch might be a week or two of work for the initial patch...
And you want people to do it in an interview?
Perhaps you envision an interview process where the candidate moves into the company's offices for a couple of weeks and has a few days to set up a workstation as they like before the interview begins?
-
Wednesday 27th September 2023 12:56 GMT Pete 2
Re: Backport
> I think you *DRAMATICALLY* underestimate the complexity of the task here.
No. See this as the sort of process that a properly maintained kernel source implementation looks like / requires.
And if an outfit doesn't have well organised and documented code repositories and processes, then any candidate worth their salt would spot that and end the interview.
-
Wednesday 27th September 2023 13:45 GMT Liam Proven
Re: Backport
[Author here]
> No. See this as the sort of process that a properly maintained kernel source implementation looks like / requires.
This is totally wrong.
That link is applying a kernel patch *to the same version*.
What you were calling for is *backporting* a patch to an older kernel version _as an interview test_.
That is a totally different proposition.
Integrating an existing patch to the version it was intended for: sure, fine, that is possibly a legitimate test, _if you're hiring a rookie kernel engineer_. Still quite a hard one, but all right.
Backporting one is a whole different thing and (off the cuff guesstimate) about three orders of magnitude harder.
-
Wednesday 27th September 2023 17:33 GMT Pete 2
Re: Backport
> Backporting one is a whole different thing and (off the cuff guesstimate) about three orders of magnitude harder
Since you are guesstimating that implies you have not done this.
I have backported plenty of stuff in my time. Some of it is tricky but certainly not three orders of magnitude (you know that means 1000 times?) harder.
Much of the stuff requires nothing more than applying the supplied patches and maybe adding some #ifdefs where appropriate. That would be a reasonable interview task to ask of someone applying for a kernel support post.
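For what it's worth, the #ifdef half of that is a well-worn idiom: kernel code that has to build against more than one stable tree switches on LINUX_VERSION_CODE. A minimal sketch, as a hypothetical demo module rather than a real patch, with an invented 5.15 cutoff:

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/version.h>

    static int __init backport_demo_init(void)
    {
    /* A backported fix often straddles an in-kernel API that changed
       between the release the patch was written for and the older
       stable tree it is carried back to; the guard picks a side at
       compile time */
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 15, 0)
        pr_info("built against a 5.15+ tree\n");
    #else
        pr_info("built against a pre-5.15 tree\n");
    #endif
        return 0;
    }

    static void __exit backport_demo_exit(void)
    {
    }

    module_init(backport_demo_init);
    module_exit(backport_demo_exit);
    MODULE_LICENSE("GPL");

The mechanics are the easy bit, though; as the replies around this argue, the hard part is knowing which side of the guard the old tree actually needs.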
-
Friday 29th September 2023 10:25 GMT Liam Proven
Re: Backport
> Since you are guesstimating that implies you have not done this.
That is an entirely fair comment. I have not.
> I have backported plenty of stuff in my time.
Kernel patches, though? When we are essentially talking about a single binary containing tens of millions of lines of code?
-
Friday 27th October 2023 04:49 GMT Claptrap314
Re: Backport
If you honestly believe that that is enough to apply a kernel patch, then please leave the industry.
Until you understand exactly what the patch is doing, until you understand exactly what the code surrounding the patch is doing in the current kernel, and until you know exactly how that task was being accomplished in the old kernel, you cannot possibly understand how to test that the backport actually does in the old kernel what it is supposed to do in the new one.
-
-
Sunday 1st October 2023 19:40 GMT Martin M
Re: Backport
> Compilation still takes time. On an *extremely* high end box it's under a minute
In 1995 I submitted a trivial patch for a file system endianness bug in the Linux 68k port. It took a while, largely because a kernel recompile took over a day on my Falcon 030. I can’t remember how much RAM it had (2MB?), but it’s safe to say it wasn’t enough and a swapfest ensued.
I got into the habit of reading over the code a few times before kicking it off…
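For anyone who hasn't met one of these: a filesystem endianness bug typically means reading an on-disk little-endian field as a native integer, which happens to work on x86 and silently breaks on big-endian machines like the 68k. A minimal userspace sketch of the shape of the fix; the kernel's own le32_to_cpu() does the same job:

    #include <stdint.h>
    #include <stdio.h>

    /* On-disk formats fix their byte order (ext2, for instance, is
       little-endian); assembling the value byte by byte gives the same
       result whatever the host's native byte order is */
    static uint32_t read_le32(const uint8_t *p)
    {
        return (uint32_t)p[0]
             | (uint32_t)p[1] << 8
             | (uint32_t)p[2] << 16
             | (uint32_t)p[3] << 24;
    }

    int main(void)
    {
        /* 4096 (0x00001000) stored little-endian, as it would be on disk */
        const uint8_t on_disk[4] = { 0x00, 0x10, 0x00, 0x00 };

        /* the bug: a raw host-order (and unaligned, strict-aliasing-
           dodgy) read, correct on x86, wrong on a 68k */
        uint32_t buggy = *(const uint32_t *)on_disk;
        /* the fix: an explicit little-endian read */
        uint32_t fixed = read_le32(on_disk);

        printf("naive read: 0x%08x, portable read: 0x%08x\n", buggy, fixed);
        return 0;
    }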
-
Wednesday 27th September 2023 15:20 GMT bjr
No big deal
People run distros, not Linux, and the distros all have their own LTS policies. They don't use the standard LTS kernel; they fork their own and maintain it for as long as they like. If all of the major distros agreed on the same LTS kernels and all worked to maintain those kernels, then the mainline LTS kernel would mean something. But they don't do that. Redhat has their LTS kernels, Ubuntu has theirs, Google picks a kernel for Android every year, and they all maintain them for as long as they see fit. It doesn't make sense for the mainline Linux project to waste resources maintaining kernels that hardly anyone uses.
-
Thursday 28th September 2023 10:15 GMT Bebu
Re: Hold on Now!
《Who could possibly be tired of PowerPoint presentations and words like "synergy" and "leverage". They're the reason we get out of bed in the mornings, aren't they?》
Probably why I try to leave the latter until the afternoon :)
As the late Mrs Parker was reported to say in response to the doorbell ringing (and I suspect to the sunrise): "What fresh hell can this be?"
For me, Teams, Zoom, Slack etc are the fiends of this fresh hell, PowerPoint having become a rather stale daemon.
-
Thursday 28th September 2023 04:26 GMT Henry Wertz 1
High stress
I saw a thread recently regarding the possible inclusion of bcachefs in the kernel; it was clear the developers are under extraordinary stress. They pointed out that, as it stands, automated bug-finding systems send them sometimes dozens of reports a week, with several filesystems having only one full-time developer maintaining them. They were not getting along terribly well, and it was clear from their interactions that every one of them was heinously overworked and approaching the end of their rope.
I don't know if this is typical of other subsystems; probably it is, since they presumably get similar loads of automated reports. Given that distros don't use them, I could see dropping the six-year support kernels as a very reasonable thing to do.
-
Thursday 28th September 2023 10:59 GMT Bebu
Other options?
If one only really wanted a stable Unix/POSIX kernel/userland system to build on, the various enterprise Linux distros are one option, but another might be a *BSD distro (FreeBSD has ~5 years' support for major releases) or an IllumOS-based distro (e.g. OpenIndiana). I suspect the kernel internals and interfaces in the BSDs and IllumOS change far more slowly and with fewer breakages than might be the case with the Linux kernel.
As an example, I was really impressed by SmartOS (based on IllumOS), especially in that it supports *both* kvm and bhyve virtualization.
-
Friday 29th September 2023 17:26 GMT James R Grinter
It’s about Interface stability
Some of us - those who actually write or run applications - benefit from API stability and (call me old-fashioned) ABI stability.
That's why the "enterprise" distributions back-port fixes instead of continuously updating their kernel versions: everyone who isn't just running a kernel for the sake of it wants to actually be doing something else with their computers.
Of course, the resources for that sustaining engineering effort are not trivial, so it's no great surprise some are pulling that work behind closed doors. It's just a shame that the options are being reduced just as commercially sponsored ones are disappearing from public access too.
-
Friday 3rd November 2023 16:45 GMT jake
The projected EOL for LTS kernel 6.1 is December 2026, but that is subject to change. If there is enough interest in keeping it going longer, this date may be extended out into the future. Rather than re-invent the wheel, you can read up on the whys and whens of active kernels here.
For the clicky-pointy averse, here's the link for easy copy/paste:
https://www.kernel.org/category/releases.html
-