Please donate
Linux is a community effort but they could use the money, okay?
Linux kernel maintainer Greg Kroah-Hartman has responded to complaints that the current promise of two years for 5.10 is not enough, explaining that support is not automatic but requires commercial help. Version 5.10 of the kernel was released in December and designated a "long-term maintenance" release, which generally means …
And the smaller SoC players, like Broadcom or Nvidia, don't provide anything financially to kernel development - and keep dumping badly written blobs they expect to be integrated into the kernel.
Yet it's Broadcom and Nvidia who are griping about the lack of long-term support.
At the same time, they're telling THEIR customers that if you want long-term support, you gotta pay up.
It's no coincidence that Nvidia acquired the most popular Linux-on-network-switch distro and that distro promptly dropped support for everything except Nvidia SoCs - it demonstrates what these people don't get about open source or communities.
"Linux is a community effort but they could use the money, okay?"
From the article, the main gist of his request is a commitment to use and test the kernel for six years. There's no point devoting resources to supporting a kernel for six years if no one bothers to use it or test it.
Linux *mainline* is a community effort.
Supporting anything but mainline is basically what some maintainers do as part of their work funded by the Linux Foundation etc. because the big sponsors for those bodies want LTS kernels.
The silly thing here was the original poster thinking that telling Greg he should actually support 5.10 for six years, because two years isn't long enough and a longer lifespan would be useful for them, would result in Greg replying "oh, you're totally right, I'll bump it up to six years just for you!"
If you are doing something at the hobbyist level you shouldn't be trying to use LTS kernels; you should be trying to follow mainline as closely as possible. If you are shipping products that rely on LTS kernels because tracking mainline just isn't physically possible, you need to either offer the people needed to do the LTS work or sponsor someone like Greg to do it.
The part that's new to me is "talking to some companies". Sounds like the complainant, or even Broadcom itself, isn't in the loop about LTS.
There should be a policy prominently displayed (like an asterisk and a footnote) that says "we can go to 6 if enough people sign up", since this exchange suggests the policy isn't clearly communicated.
In the case of SoC-based distributions, one good example is Cumulus Linux.
It was working well, until Nvidia acquired the company. Support for all non-Nvidia network switch SoCs was immediately removed from the distribution.
Another example is the SoC distribution used in Huawei (HiSilicon) based NVR/DVR/IP camera kit (which is virtually all of it), written and supplied by Xiongmai (XMeye) - the entire ecosystem here is one big GPL violation and a series of lawsuits waiting to happen.
It's actually not just about kernel support life; it's more about support contracts for warranties and publishing updates for devices.
... or more exactly, about raking in fees for support and at the same time NOT bothering to publish updates for devices in warranty (or post warranty) while claiming that a device X is still running "current" firmware because... see.. it's still running an actively supported kernel. (*cough* f.u. TP Link *cough*)
This is why they like the long 6-year support cycles, they get to claim that their device X is "current" when running a particular kernel version from 3 years ago that's still "supported" and that's proof that they "care" about updates and support contracts. Managers signing on those contracts don't usually check the minor versions or patch numbers, they just check the version numbers that match their contracts.
The moment the kernel becomes obsolete, it will become much easier to reveal such scams in support contracts and the lack of support / firmware updates, by simply pointing at the kernel version.
If it were up to me I would even trim the LTS tails of all those 6-year versions down to 2 years.
Why do you care what kernel is in your TP-Link device, especially if it is a supported version? I just checked one of my $100k storage arrays, which is running supported software from the vendor, on a 2.6.32 kernel (apparently built in 2017). It works fine; it's a pretty locked-down system (I managed to sneak my ssh key onto the system during a recent support call, otherwise customers don't generally have Linux-level access to the system). I don't have concerns. It's by far not the most recent OS release for that platform, but it is technically the most recent recommended release (just had the latest patches applied a few weeks ago) for that generation of hardware (the hardware was released in late 2012; the system was purchased probably in 2015, I think).
I recently reported (again) some bugs with the software (not kernel related, but in the storage functionality). Support said I can upgrade to the next major version to get the fixes, though I deem that too risky myself, given that the engineering group has told me themselves they don't generally recommend that version for this hardware. It does work, and is supported, but I'm a super conservative person with storage, so I'd rather live with these bugs, which I can work around, than risk different, perhaps worse, bugs in the newer version. I'm unlikely to see such bugs given my use cases, but it's just not THAT important to upgrade, so I will run this release of software until well past end of life (late 2022), probably not retiring this piece of equipment before 2024.
(linux user since 1996)
I've always found that embedded devices have very old kernel versions, that is, they aren't "supported" at all. The companies just keep churning them out once they have software that mostly works.
I remember finding an open telnet port on a cheap webcam. It was running a 2.x kernel with an old version of busybox as the shell. I logged in using root/123456.
He added that he does not recommend using a single kernel version for more than 2 years "on systems that you actively support and maintain". He blamed "customer-unfriendly SoC vendors" for providing "millions of [lines of] out-of-tree code" that is specific to that kernel.
It'll probably never get done due to the nature of Linux (it would require the kind of collaboration that would not only be mutually beneficial but would benefit non-contributors and people who actively dislike the GPL, so it probably won't happen), but a decent, well-specified driver interface would really help here.
If, as a driver writer, I knew I could target an interface that was guaranteed stable for a set amount of time, that would make my life much, much easier.
However, writing and maintaining such an interface is hard, especially in the face of many competing interests and the decentralised nature of Linux development. It would require a well-defined and stable spec to be created and then adhered to; would it be in the best interest of all parties involved to make this happen? I dunno. I suspect it would take a Google or similar to actually do it. But would they fund such a thing, knowing it was going to be available for all, or would they put the effort into something they'll control? (They do have Linux to tide them over.)
There was a comment on a Windows article here once, where someone laughingly referred to the time when you had to use A FLOPPY to install the SATA drivers as part of the Windows XP setup.
This really stuck with me though. The first SATA spec was released in 2003, Windows XP in late 2001.
So you were able to install an OS from 2001 on hardware that didn't exist when it was written, simply because of a well-defined and stable driver interface. Can you imagine how much easier Android would be to upgrade if this were the case? How many more devices wouldn't be using old, vulnerable kernels, as they could receive kernel updates in a timely manner because only driver qualification would be needed rather than actual source code changes?
Also, just to be clear, I'm advocating a well-defined, well-specified binary interface that remains consistent and is supported. None of this precludes the GPL; the 'in tree' drivers could slowly be updated to call the interface as required. (I've often heard GPL and in-tree cited as arguments for why a driver binary interface is a bad thing, but the reality is it's just bloody hard to do.)
Aside: I write all that as someone who's recently reinstalled Linux to use as a PVR... and the flashbacks as I had to compile just the right version of the kernel to get just the right version of the hardware drivers to work were unpleasant. At least I only have one TV card in now; back when I used two, it required finding just the right kernel version that would compile against both the new and old drivers.
I was installing in a VM with hardware passthrough for the card, and I so nearly just put Windows on, as I suspect that would have just worked... this made the young version of me extremely sad, but the reality is, the older I get, the less time I have to spend 'make menuconfig'ing for fun.
In 1995 I put a SCSI podule in my then computer, it just worked... the drivers were on the card. In 2021 I put a TV card in my computer... 3 days later... it worked. That's the wrong way round!
>It'll probably never get done due to the nature of Linux
That's all down to the vendor. There are vendors like NXP and Xilinx that are actually working on mainline, trying to get all of their stuff in; there are vendors like Allwinner that seem to actively avoid mainline, but there is enough hobbyist effort to reverse engineer and support their stuff to the point where it's as good as or better than the officially supported stuff... then you have vendors that are happy to stay on some old Android kernel forever. That's not a Linux problem, that's a vendor problem.
Stop using those vendors for your products.
>but a decent, well specified driver interface would really help here.
Why should mainline shoot off its own legs to help vendors that have no intention of actually working with mainline? Linux doesn't break userland, but internally it can change dramatically, for example by reworking entire subsystems that can't take the sort of features people want to add to them anymore.
What you're saying is mainline needs to maintain a lot of compat guff to allow for a Windows-esque driver ABI to help vendors that have no interest in mainline? Do you not see how messed up that logic is? People who want to improve Linux should waste their time on compat bullshit for vendors that don't give a crap? Would you do that work? Maybe if you were getting paid by one of those vendors... and if that were the case it would be a proprietary layer they would not release to the community.
FYI: those vendors already do this, and the result is wifi drivers that are bigger than the rest of your kernel because of all the layers of HAL crap and the reproduction of stuff that's already in the kernel, like mac80211.
>I knew I could target an interface that was guaranteed
>stable for a set amount of time, that would make my life much, much easier.
No it wouldn't. As a driver writer you would want all of the new helpers and generic code that has been added to make writing drivers easier and to reduce the duplication for hardware that does the same function with a slightly different hardware implementation. For example: would you want an ABI where you present a block disk to the OS when you're writing an SD card host driver? No. Because that would mean you now have to rewrite the whole Linux MMC layer in your driver.
What you actually want is for someone else that actually understands the SD card spec to have come along and carved out all of the pieces needed to write an MMC layer driver for an SD host so you can write a driver by filling in the 5 or 6 callbacks it needs to drive your specific hardware. Actually having to look after your code after that is the payment for having someone else make your life easy in the first place.
And I can't be arsed to respond to the rest of what you wrote.
If vendors want their shit to work with mainline and LTS kernels coming off of mainline they need to work at getting their stuff upstreamed. It's simple as that. No one is going to waste time making their lives easier for the sake of it.
Why should mainline shoot off its own legs to help vendors that have no intention of actually working with mainline?
I think the answer lies in this article. The maintainers are saying they're not going to maintain this release for more than 2 years unless people actually start using it. The people who can actually lend support / help / effort / money to the task of maintaining Linux - vendors - aren't interested in using it because they know their efforts (which focus mostly around device drivers) go to waste when the "controllers" of Linux go and bugger up the driver interface. Again.
What you're saying is mainline needs to maintain a lot of compat guff to allow for a Windows-esque driver ABI to help vendors that have no interest in mainline? Do you not see how messed up that logic is? People who want to improve Linux should waste their time on compat bullshit for vendors that don't give a crap? Would you do that work? Maybe if you were getting paid by one of those vendors... and if that were the case it would be a proprietary layer they would not release to the community.
Google are currently developing a kernel module to host device drivers, with the aim of providing a stable interface for device drivers. That's a major vendor's response to the matter.
>Google are currently developing a kernel module to host device drivers,
Got a link for that? And you realise that isn't a new idea, right? There have been shim modules for ages. There have even been non-vendor attempts at it, like the wifi compat stuff. Guess what: it never works out, because you end up with a ton of Linux in your module and you have to keep adding to that with every old kernel you want to support. You always end up with what we have now: LTS kernel releases to help out when tracking mainline isn't possible, and people whining that other people should do work for them for free because *reasons*.
That's all down to the vendor. There are vendors like NXP and Xilinx that are actually working on mainline trying to get all of their stuff in, ...
You're confusing the collaborative nature of the GPL with a decent interface. It's good that vendors are contributing; the collaboration is a good thing in general.
However, that is not the issue. You can have everything in tree and still have a well defined interface.
>stable for an amout of time that would make my life much much easier.
No it wouldn't.
Why wouldn't it? Hardware doesn't change that often; your argument seems to boil down to "things change"? If I have a driver for a network controller, for example, why can't the underlying interfaces be kept stable or versioned? Why do I have to currently know what the kernel 'looks like'? So long as I know interface version X is there, I'm good... so long as interface X is there in future, you can just load my driver... no fuss.
NetBSD and FreeBSD provide good examples of how this can be done at the kernel level. Their binary interfaces, while less exposed than a Windows-style fixed ABI, are much cleaner and better defined. (So you can generally use the same network driver on whichever machine you're running on, as the underlying bus discovery and communication is handled at a lower layer, with a well-defined and consistent interface.)
As a driver writer you would like to have all of the new helpers and generic code that has been added to make writing drivers easier and reduces the duplication for hardware that does the same function with a slightly different hardware implementation.
As a driver writer I want a consistent interface so I don't have to rewrite the driver every time a new kernel is released. If I know I need to trivially re-engineer my driver every X years because that's the life of the ABI, that's fine. Doing it every few months becomes a less practical use of my time, especially if I'm being paid, which just compounds the problem of out-of-date software running when it shouldn't be.
For example: Would you want an ABI where you can present an block disk to the OS when you're writing an SD card host driver? No. Because that would mean you now have to rewrite the whole Linux MMC layer in your driver.
I don't see your point here... I would like an ABI to which I could present block-structured storage if I'm writing a driver for such a device, as these are quite common and I could write a driver to support a new one easily. If I'm implementing a new class of hardware then of course I need new interfaces. But your argument would seem to suggest that supporting new hardware and having a decent driver interface isn't possible. Windows/macOS would suggest otherwise. So in your example, I'd have a storage class driver that implements the MMC layer and exposes a well-defined interface to the SD card host driver.
You could then not only be sure your drivers would work going forward, but you could swap out my MMC layer with your own if you so chose, as the interfaces would be well defined and remain consistent. This is just good software engineering; it really has absolutely nothing to do with open source or the GPL.
It's also a lot of work that no one wants to do for free.
This goes back to my point about loading SATA drivers on Windows XP, which had no concept of SATA. Can you not see how that is a good thing? Most USERS of Linux don't care about the GPL or 'upstream source' or vendors being good citizens; they just want their hardware to work, and to be secure.
Like I said, it's hard.
What you actually want is for someone else that actually understands the SD card spec to have come along and carved out all of the pieces needed to write an MMC layer driver for an SD host so you can write a driver by filling in the 5 or 6 callbacks it needs to drive your specific hardware. Actually having to look after your code after that is the payment for having someone else make your life easy in the first place.
Yes. That's exactly what I want, but as well as that I want STABILITY... so those interfaces are well considered and extensible, and either versioned or unchanging (let's be fair, versioned...).
(You do see that the 5 or 6 callbacks are a driver interface? JUST KEEP THEM STABLE).
A driver layer is exactly that: it's having someone else look after the bits I don't care about, and not having them change all the time. This is why I say it's hard; it _does_ require a lot of work to make other people's lives easier.
I can load a WDDM1.0 driver written for Windows Vista on Windows 10, I can't load a WDDM2.0 driver on Vista as it supports newer features that the Vista kernel doesn't know about.
Don't get me wrong, I understand your point of view (I think you conflate well-designed code with open source, but hey), but unfortunately it will be the death of Linux.
Google are writing a replacement OS for Android, and they're also working on a binary driver layer for Linux to SOLVE THIS EXACT PROBLEM. (I wouldn't be at all surprised if this is a Fuchsia driver layer ported to Linux; that would be the smart thing to do from their point of view...)
And I can't be arsed to respond to the rest of what you wrote.
That's okay, I never asked you to. It's a forum on a tech news site, we're all just angrily shouting into the void really. No one cares what either of us think.
If vendors want their shit to work with mainline and LTS kernels coming off of mainline they need to work at getting their stuff upstreamed. It's simple as that. No one is going to waste time making their lives easier for the sake of it.
Just read that last sentence again. THAT is the problem. (Hint: massively reducing the workload of a lot of other developers isn't a waste of time.)
The reason you might think NetBSD has stable interfaces is, well, because NetBSD is basically dead at this point. To use it as an example of something people would seriously consider writing drivers for outside of the hobbyist space is hilarious.
Anyhow, what you are asking for is a module interface that exposes raw bus interfaces... which is already there and is already *stable*, because that stuff has been in use for a long time.
The problem is, people who actually write drivers don't want to do that unless they explicitly don't want to use the proper subsystem for something. I.e. Nvidia shouldn't need a shim driver, and their drivers would probably be smaller if they didn't use a shim, but they have secrets they want to hide, apparently.
No one in mainline is going to waste their time on something that would only be used by the people who would rather maintain an awful shim than upstream.
Your driver won't break between kernel versions if you actually upstream it because whoever makes breaking API changes will also fix your driver otherwise the changes won't go in.
Anyhow, the kernel contains a document that already covers this. I think you should try sending an email to its author and see what happens.
https://www.kernel.org/doc/html/v5.10/process/stable-api-nonsense.html
"It'll probably never get done due to the nature of Linux "
Actually it's a direct shot at Chinese (Taiwanese and PRC) software pirates like Huawei and TP-Link who think that GPL means "public domain" - and who argue exactly this line as justification for refusing to provide source code, for not paying for development, and for having stripped attributions out of the code.
Huawei's HiSilicon subgroup in particular is one of the largest Linux piracy operations in the world.
Huawei's HiSilicon subgroup in particular is one of the largest Linux piracy operations in the world.
I always think this is a shame; it's not like there aren't alternatives they could use without falling foul of the GPL.
I wonder if this will come back to bite them 10-15 years down the line? As I understand it the whole concept of 'copyright' is still fairly new in China, but as they're now starting to lead the world in various areas I suspect the concept of IP may quickly be learnt. :) :)
It isn't just a tech problem, though: the (gloriously un-ironically named) CCTV (China's state broadcaster) found itself with a very large bill recently for similar reasons.
I use the Ubuntu LTS releases for the simple fact that I don't want to reinstall the OS every 2 years. (Currently running 18.04 and ok with it.) I can see the poster's complaint - why should I bother with something that won't get supported in 2 years? And no, 2 years isn't "long term", regardless of the sophistry used to argue that it is. I understand that an EOL date can change, but I don't really expect it to, and my planning will be based on the current EOL date.
Me too; hell, on my laptop I ran Ubuntu 10.04 LTS way past end of life (didn't want Unity; eventually installed Mint 17), and only in the past 3 months I think did I install Mint 20 (was on Mint 17 before, so ran it a good 18 months or so past end of life). So far Mint 20 has more bugs that affect me than 17 did, but whatever, no deal breakers (and really nothing new experience-wise that makes me happy I upgraded). I do maintain my browsers separately (manually) from the OS, so I do get updates (running Pale Moon, Firefox ESR and SeaMonkey at the same time for different tasks).
When I was on Mint 17 I actually locked my kernel to an older release (4.4.0-98) and ran that for a solid 3 years because I was tired of shit breaking randomly after upgrades (mainly sound not working after more than one new kernel upgrade, using a Lenovo P50 laptop). I would probably have stuck to the 3.x kernels on Mint 17, but I had to upgrade to 4.x to get wifi working (something I didn't realize for the first 6 months, until I traveled, since I never use wifi at home with my laptop, always ethernet).
I have been annoyed for so long with Linux's lack of a stable ABI for drivers. I know it'll never get fixed; I've been using Linux for almost 25 years now, but it still annoys me. Fortunately these days, server-wise, most of my servers are VMs. It was so frustrating back in the earlier days having to slipstream ethernet and storage drivers into Red Hat/CentOS kernels to kickstart systems, and having to match up drivers with the kernels (even if it was off by a small revision, it would puke). I think that was the last time I used cpio.
> why should I bother with something that won't get supported in 2 years? And no, 2 years isn't "long term", regardless of the sophistry used to argue that it is. I understand that an EOL date can change, but I don't really expect it to, and my planning will be based on the current EOL date.
There seems to be a blurred line between Canonical's support for its distro and the support GKH wants for the mainline kernel offshoots (and who he expects it from) here.
The poster writes with specific regard to the kernel.org announcement having a two-year EOL statement for the 5.10 kernel at the time of writing. The point he raises is that it doesn't also say "this could become longer" on the page the article quotes him as citing. Like myself, he may not have found it elsewhere in a quick search either.
As a distribution end user, you need not worry so greatly: if Canonical adopt kernel 5.10 for the next Ubuntu LTS release, GKH will get the help he needs from them to change the lifespan announced at kernel.org, and similarly Ubuntu will continue to offer the lifespans you're used to enjoying; everybody wins. If a more preferable alternative turns up, or 5.10 itself doesn't permit a similar promise*, it will get avoided, and you still win.
Canonical may have statements elsewhere that elaborate; I personally see no need for end users to worry unduly over the above.
* on a related note it's already possible that 5.11 could be the stable kernel that ends up in Ubuntu 21.04 according to https://www.debugpoint.com/2020/11/ubuntu-21-04-features/