Long-term supported distros' kernel policies are all wrong

A new hire at Rocky Linux creator CIQ is rocking the LTS-Linux-distro boat – by shining a spotlight on the elephant in the room (or one of the herd). A recent blog post from Rocky Linux developer CIQ, subtitled Cracks in the Ice, examines "Why a 'frozen' distribution Linux kernel isn't the safest choice for security." The post …

  1. Anonymous Coward
    Anonymous Coward

    I'm missing something

    If the FOSS community has switched to LTS versions where L = 2 years, and RHEL (et al.) want L to be 10 years, then how does persuading RHEL to choose a FOSS-designated LTS version change anything? After 2 years their back-port efforts won't be applicable, just as at present?

    1. Nuno

      Re: I'm missing something

      They are cutting LTS to 2 years because they don't have the resources to maintain them.

      If enterprise Linux companies chip in with upstream patches, they don't need as many resources to maintain those LTSes.

      1. Liam Proven (Written by Reg staff) Silver badge

        Re: I'm missing something

        [Author here]

        > They are cutting LTS to 2 years because they don't have the resources to maintain them.

        > If enterprise Linux companies chip in with upstream patches, they don't need as many resources to maintain those LTS'.

        That is exactly right. Thank you, Nuno.

        The kernel team is dropping the number of LTS kernels and the lifetime of the kernels because it doesn't have the manpower to maintain them.

        Meanwhile, enterprise distros maintain their own different long-term kernels, which are *not* official kernel.org upstream LTS kernels. They do not use the upstream code and they do not contribute back.

        If RHEL, SLE and Debian chose the latest LTS kernel for each of their releases and then pushed bug fixes back upstream then we could have more LTS kernels supported for longer.

        They don't all need to use the same versions -- there are 2 a year and enterprise distros don't release anywhere near that often. They don't need to contribute feature improvements back upstream. Yes their internal versions will drift a bit but that's OK. The community still wins.

        1. Anonymous Coward
          Anonymous Coward

          Re: I'm missing something

          > If RHEL, SLE and Debian chose the latest LTS kernel for each of their releases and then pushed bug fixes back upstream then we could have more LTS kernels supported for longer.

          Lots of 'ifs' and 'coulds' is the bit that I was missing. The big distros aren't doing it now so it seems unlikely a plea from the kernel guys will cause them to change their minds.

    2. chasil

      IBM long-term vs. CIQ/SUSE/Oracle long-term

      The industry has grown familiar with RHEL-compatible LTS kernels, the source of which is now controlled by IBM, who would very much like to monetize it more strongly.

      The question is if CIQ/SUSE/Oracle (as members of OpenELA) are willing to provide an alternative.

      Oracle already distributes their own custom kernel (the UEK), but it does not occupy the same niche and it has a much smaller development team.

      1. Anonymous Coward
        Anonymous Coward

        Re: IBM long-term vs. CIQ/SUSE/Oracle long-term

        > The question is if CIQ/SUSE/Oracle (as members of OpenELA) are willing to provide an alternative.

        I would tend to think that they aren't, given that the raison d'être of OpenELA is to make it possible to copy RHEL. There's also at least one influential and vocal SUSE dev that is strongly against the entire fixed-release methodology of backporting selected fixes, and promotes his preference for rolling releases.

  2. RedGreen925 Bronze badge

    "Follow the money."

    Who would have thought that the parasite corporations' greed is a source of problems? Must have been some new ground-breaking AI that led this study to the blindingly obvious, after literally well over a century of experience with it happening.

    1. Snake Silver badge
      Trollface

      RE: "Who would have thought"

      No, no, never.

      Don't you know that everything that happens today is original?? Shirley we could never learn from those *old* things back then!

  3. Martin Gregorie

    I've noticed odd happenings where Redhat Fedora and FOSS databases collide

    Specifically, about a month ago I was running Fedora 38 and wanted to update PostgreSQL to its latest version (14 or 15), but couldn't get that to work. Further research showed that PostgreSQL recommended not installing it on any Fedora release later than 38, but didn't give any reasons for this advice.

    Consequently, I decided to switch to MariaDB. Annoyingly, this hasn't helped, because the current Fedora 39 MariaDB packages don't include a package which the current version of MariaDB requires for the process of creating a new database copy to complete. Despite this package being mentioned as a requirement in the database-creation recipe, it's not included in the Fedora 39 download library. The situation is not at all helped by the mess otherwise known as the MariaDB manual, which lacks an index.

    I'm about to upgrade to Fedora 40 and then try again, but am uncertain which relational database to try installing after the upgrade: PostgreSQL, MariaDB, Firebird, H2, or Derby.

    1. AMBxx Silver badge
      Facepalm

      Re: I've noticed odd happenings where Redhat Fedora and FOSS databases collide

      If you're debating postgres vs H2 then you need to take another look at your requirements.

      1. Martin Gregorie

        Re: I've noticed odd happenings where Redhat Fedora and FOSS databases collide

        Prior to the move to Fedora 39 and a simultaneous move to new hardware (previously an elderly dual-Athlon box, now a Ryzen-based desktop), I was using PostgreSQL to implement an email archive, so a fairly simple DB schema: basically a single entity type (each instance an email, indexed by sender, date and title), updated once a day with new email after rejecting spam and mail from unwanted senders. This worked fine until the old box crashed and it turned out that there was an incompatibility between Fedora 38 and Postgres.

        I also wrote a search tool and a data management tool for the archive, both in Java and using JDBC, because my DB loaders etc. are all Java.

        I initially used PostgreSQL to find out about it (my DBA experience before this includes Access, IDMSX, Sybase, and RedBrick, one of the first data warehouses).

        After an upgrade to Fedora 40 I'll see if the PostgreSQL incompatibility still exists. If it does, I'd appreciate hearing about experience with FOSS databases such as Firebird, H2, HSQL, or SQLite, though I'd prefer to stick with PostgreSQL since my known schemas, JDBC modules etc. have been well tested with it.

        Any suggestions and/or comments appreciated.
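For anyone curious what such an archive looks like, here is a minimal sketch of the single-table schema described above. It uses Python's bundled sqlite3 purely so the example is self-contained; the real setup described is PostgreSQL with Java/JDBC, and the table and column names here are illustrative guesses, not the actual schema.

```python
# Sketch of a one-table email archive: each row is an email,
# indexed by sender, date and title, loaded once a day.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE email (
        id      INTEGER PRIMARY KEY,
        sender  TEXT NOT NULL,
        sent_on TEXT NOT NULL,   -- ISO-8601 date string
        title   TEXT NOT NULL,
        body    TEXT
    )
""")
# Secondary indexes matching the "indexed by sender, date and title" description.
conn.execute("CREATE INDEX idx_email_sender ON email (sender)")
conn.execute("CREATE INDEX idx_email_sent_on ON email (sent_on)")
conn.execute("CREATE INDEX idx_email_title ON email (title)")

# Daily load: insert the day's accepted mail in one transaction.
with conn:
    conn.executemany(
        "INSERT INTO email (sender, sent_on, title, body) VALUES (?, ?, ?, ?)",
        [("alice@example.com", "2024-06-01", "Re: kernel LTS", "...")],
    )

# The search tool then boils down to indexed lookups like this.
rows = conn.execute(
    "SELECT sender, title FROM email WHERE sender = ?", ("alice@example.com",)
).fetchall()
print(rows)
```

The same DDL (minus SQLite quirks) should port to PostgreSQL, MariaDB, Firebird, H2, or Derby, which is one argument for keeping the schema this plain.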

    2. Charlie Clark Silver badge

      Re: I've noticed odd happenings where Redhat Fedora and FOSS databases collide

      A simpler solution might be to run it in a Docker container.
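For what it's worth, that can be a couple of commands with the official postgres image from Docker Hub; a sketch (the version tag, container name, volume name, and password here are illustrative, not recommendations):

```shell
# Run PostgreSQL 15 in a container, keeping data in a named volume
# so it survives container replacement. POSTGRES_PASSWORD is required
# by the official image; pick your own.
docker run -d \
  --name pg \
  -e POSTGRES_PASSWORD=change-me \
  -p 5432:5432 \
  -v pgdata:/var/lib/postgresql/data \
  postgres:15

# Connect with the psql shipped inside the container:
docker exec -it pg psql -U postgres
```

Upgrading then means pointing the same volume at a newer image tag, independently of whatever Fedora happens to ship.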

      1. Anonymous Coward
        Anonymous Coward

        Re: I've noticed odd happenings where Redhat Fedora and FOSS databases collide

        simpler is to stop using rhat crap

        1. Charlie Clark Silver badge

          Re: I've noticed odd happenings where Redhat Fedora and FOSS databases collide

          Which crap? Fedora? Postgres? Or Docker? I'm not a huge fan of Docker but it does make it easy to have unified developer environments, especially in cases like this.

          1. Bill Bickle

            Re: I've noticed odd happenings where Redhat Fedora and FOSS databases collide

            I feel like this dialogue about mismatched versions validates the concept that Red Hat, SUSE and other enterprise-type Linux variants espouse: if you don't slow down and stabilize an operating system, and provide enough runway for ISVs to test and support it, but keep "advancing" it, then lots of things will break.

    3. Anonymous Coward
      Anonymous Coward

      Re: I've noticed odd happenings where Redhat Fedora and FOSS databases collide

      or try not using Rhat crap

  4. unimaginative
    Unhappy

    > The snag is that implementing them would mean persuading billion-dollar companies to play nicely together.

    Be careful what you wish for. The snag with businesses playing nicely together is that, as Adam Smith pointed out, it always ends in "a conspiracy against the public".

    1. Liam Proven (Written by Reg staff) Silver badge

      [Author here]

      Yes, in the long run. But:

      "In the long run, we're all dead." (J M Keynes.)

      FOSS _can_ be an exception to this.

      1. Freddie
        Joke

        I'm not sure that running FOSS means that we'll live forever.

  5. Charlie Clark Silver badge

    FreeBSD got it right.

    A clear and consistent strategy, and systems that measure uptimes in decades. AFAICS Linux just has more drivers.

    1. BinkyTheMagicPaperclip Silver badge

      Re: FreeBSD got it right.

      That really depends what you're trying to get FreeBSD to do. As a desktop, Linux is at least an order of magnitude better, and possibly more.

      As a server whilst FreeBSD has some decent technologies, it is not uncommon to find a similar functionality gulf between BSD and Linux.

      I'm trying to move off Windows 10, and use FreeBSD as my Unix of choice. Even in areas you'd expect it to be strong, I'm running into situations where it's almost immediately necessary to write your own code or scripts, and any pre-existing work is tailored to the very specific use case of whoever wrote it, with limited flexibility or error checking.

      Thinking logically, I would not recommend FreeBSD as a first choice unless the BSD license offers substantial benefits. However, I'm a fan of BSD in general, so I continue to learn/waste my time getting it to achieve what I need. I *would* recommend OpenBSD, provided you're only using it for firewalling or its base networking, and you're not using wireless.

      For quality of experience I would very definitely rate it below what mid nineties OS/2 offered at the time.

      1. hedgie Bronze badge

        Re: FreeBSD got it right.

        Not unexpected, alas. As much as Canonical gets plenty of well-deserved jabs from Linux aficionados, they really *did* make it more of a desktop OS. More things "just worked" with Ubuntu[1] and it became far more accessible even to the only moderately techie. Most other distros now are pretty easy to get going and use as a daily driver. The projects I have seen to make a *BSD[2] along those lines are mostly defunct, and never had the kind of backing that *buntu has. Getting a *BSD to the place that Linux has reached, for someone sick of Windows and not wanting to deal with Apple, would need cash, enough people working on it, and evangelists pushing it.

        [1] A family of distros that I haven't touched in at least 10 years, but was ground-breaking at the time for someone whose previous Linux experience was Yellow Dog on an old G4, and whose primary UNIX experience was and still is that proprietary oddness out of Cupertino.

        [2] And Macs don't count for this purpose.

    2. jaypyahoo

      Re: FreeBSD got it right.

      Also NetBSD :)

  6. Bill Bickle

    Can CIQ declare CIQ Linux as the best path and move forward?

    I feel like CIQ needs to do what Red Hat, SUSE, and Canonical do, which is "build from the base Linux kernel and then create their own version of value and capability", and then go market and sell it and build an ecosystem around it. Versus coming up with ways to take potshots at Red Hat. Red Hat has the most successful commercial open source model in the history of computing, and CIQ is a venture-capital-backed for-profit company. I wish it would just go compete on creating a better Linux if they think they have good ideas on a better path forward. Maybe that is what this story will lead to: them standing on their own as a unique company with value-add.

    Kind of sick of this crud

  7. Anonymous Coward
    Anonymous Coward

    Tell Me About The Alternatives.......

    Open source, multiple potential suppliers.........sounds pretty good to me............

    ......compared with "walled gardens" elsewhere......

    Am I missing something here?

    1. theloon

      Re: Tell Me About The Alternatives.......

      yes you are missing much.

      Open does not automatically equal more robust or more reliable.

      Perhaps an 'open' mind to the holistic requirements of real-world use cases would be a more complete way to examine things....

  8. NickHolland
    Mushroom

    Long Term Support is Long Term Problem

    Having worked at a number of companies, I've seen the same pattern over and over with RH and RH-like systems: Long Term Support leads to unmaintainable systems. Same thing happens almost every time:

    1) Project is installed

    2) Patches are done regularly

    3) People who set up the system leave the company

    4) People keep using the old system as it was

    5) Deadline for the end of LTS starts creeping up

    6) Panic! We must update!

    7) Realization sinks in that no one understands how the system works, no one dares touch it.

    8) No updates are done

    9) Hope you get a new job before it "matters"

    10) Efforts to replace old system are nixed by management because, the old system is "working"

    11) the old system keeps on running.

    12) goto step 7

    Now, I'm sure someone has a system that's been running on RH for 15 years, managed by several generations of staff and upgraded from RH5 to RH9 in a timely manner, using carefully written documentation. But it is rare.

    I've seen all kinds of excuses as to why this won't happen THIS time, but ... go ahead, prove me wrong.

    Let's not forget the case where the application requires newer versions of add-on packages (e.g., PHP) than the LTS distribution provides. That's a great way to reach step 7 before the LTS is up. The OS may be patched, but the application updates require a new version of something that's not in the distribution.
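On the RHEL 8/9 family, module streams were meant to soften exactly this PHP-style problem; a sketch (the stream version is illustrative and varies by release):

```shell
# See which PHP streams the distribution offers alongside the default.
dnf module list php

# Opt in to a newer stream and install from it.
sudo dnf module enable php:8.2
sudo dnf module install php
```

It only helps for the handful of packages the vendor chose to modularise, of course, which is how you still end up at step 7 for everything else.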

    The only REAL way I see out of this trap is to do VERY regular OS and application updates -- at least yearly. Every time, someone new should do the update to test the documentation and shared knowledge, with the previous people on reserve in case something goes wrong. If you can't manage the update, you have the old one to live on while you figure out how to re-implement or replace the product.
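On Fedora, at least, the regular in-place update being advocated here is well supported by the dnf system-upgrade plugin; a sketch (the release number is illustrative; RHEL proper uses the separate leapp tooling instead):

```shell
# Bring the current release fully up to date first.
sudo dnf upgrade --refresh

# Fetch the next release's packages, then reboot into the offline upgrade.
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=40
sudo dnf system-upgrade reboot
```

The mechanics are the easy part; the point above stands that someone new should drive it each time, against the documentation.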

    1. This post has been deleted by its author

    2. Richard 12 Silver badge

      Re: Long Term Support is Long Term Problem

      That situation is exactly why companies pay Red Hat for the LTS.

      They think they're buying support and assistance for updating. The problem is that said support and assistance generally doesn't exist, and they don't find out until five to ten years down the line.

      1. NickHolland

        Re: Long Term Support is Long Term Problem

        At a past job which was a paid RH shop, we found RH support to be 100% useless UNLESS it was about a licensing problem. So... we paid money to get assistance with the tools that shut our systems down to make sure we paid the money to get support for free software.

        To be fair, a current coworker assured me they were very useful to a former employer of his -- which had tens of thousands of RH licenses. So if you are really, really big, perhaps you can get useful support out of RH. But if your count is in the hundreds... nope.

    3. Bebu
      Windows

      Re: Long Term Support is Long Term Problem

      Addressing the dot points, I think the organisational culture is often at fault here.

      The platform (eg RHEL 7) and application are viewed as a simple project where, once the system is operational, the job is done; contracts end or staff are redeployed. The idea of running in parallel a development environment with the next platform (eg RHEL 8) and prerelease versions of the application is seen by management as a total waste of resources.

      > The only REAL way I see out of this trap is to do VERY regular OS and application updates

      I agree, if you are referring to minor-version platform updates (eg RHEL 7.9 -> RHEL 7.10) and analogous application updates. Major-version updates (eg RHEL 7 -> RHEL 8) can hold very nasty surprises, and doubly so with equally courageous* updates of the application.

      RH, by backporting fixes and features from later kernels (and consequently from glibc), reduces the number of such surprises between major EL versions.

      Nothing peculiar to RHEL or Linux here. Upgrading a DEC Tru64 system from 4.0g to 5.1a had a few surprises too (Display PostScript was dropped after 4.0d, so systems upgraded from 4.0d to 4.0g retained it, but those going from 4.0g to 5.1a did not - quite an embuggerance if you had an extremely expensive CFD package that depended on DPS).

      At least with the proprietary Unixes you had a stable kABI and well defined (proprietary) hardware which had the one benefit of feasibly maintaining a fairly stable system over a decade or more.

      With FreeBSD I think you are stuck in six-monthly kernel + world rebuilds if you wish to stay current. Probably not a big deal today, with seamless failover and VM migration or process migration between redundant instances, but it was back in the day when bare metal was expected to stay up for possibly years.
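For what it's worth, the binary route avoids the full source rebuild these days; a freebsd-update sketch (the release number is illustrative):

```shell
# Patch the currently installed release:
sudo freebsd-update fetch
sudo freebsd-update install

# Jump to a newer release; re-run `install` after each reboot
# until freebsd-update reports there is nothing left to do:
sudo freebsd-update upgrade -r 14.1-RELEASE
sudo freebsd-update install
sudo shutdown -r now
```

The old `make buildworld` dance is still there for custom kernels, but stock systems no longer need it just to stay current.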

      * As in "Very courageous, Minister." - Sir Humphrey

    4. jaypyahoo

      Re: Long Term Support is Long Term Problem

      Glad to have found NetBSD. Been using it since 8.x; an easy OS to upgrade and understand. But my requirements are enterprise-grade, so only someone with practical experience would know.

  9. Anonymous Coward
    Anonymous Coward

    While I understand the desire to be evergreen, a CEO of an outfit like Rocky should also understand that if it wants to take revenue from Red Hat-like service contracts, then software certification ("whether a patch is available or not") is a factor.

    For better or worse, the presence of specified versions of software IS a requirement in some places. Continuous patching on your home desktop? Fine. (And in fact, that is what I do: bleeding edge of Manjaro... which is not without its problems!)

    It's not even that simple on, say, the distribution you want a collection of office desktops or file servers using, because inevitably patches to one thing will break other features until they also catch up. (See also: MS!)

    It is at moments such as this that the reasons for containerised distributions become much, much clearer. Dependency hell is, and will probably always be. While expensive on disk space, breaking out your compatibility issues into containers does resolve that particular headache, while also being acceptable in the eyes of those that need the specified version.

  10. Alex 72
    Linux

    Debian

    Just use Debian: they already use the LTS kernels as a base and would work with anyone willing to help in good faith. Even though Debian will help you use it longer, maintain a staff that can handle an upgrade every 2 years. Organizations that can achieve this tend to be more secure and agile, saving more than the investment in skilled development, operations and security staff over the long haul. In cases where an unavoidable delay means you need to go beyond this, part of that same staff can help maintain the kernel whilst others work on the issue causing the delay. Just my 2¢, YMMV.

  11. frankvw

    Not just Redhat

    Speaking from experience, I have been forced to change my mind about LTS. For decades I've been an Ubuntian (a position I'm currently reconsidering, BTW) and since 2012 I've exclusively stuck with LTS releases, on the basis that it would allow me to postpone the PITA of having to deal with upgrades regularly. I tend not to stick with out-of-the-box environments (especially where the GUI is concerned) and I have also had bad experiences in the past with partial upgrades that left a lot of accumulated crud behind, which became progressively more difficult to deal with. No, I said to myself, I'm just going to stick with LTS and do a clean reinstall when my LTS version goes EOL - but that's years ahead, so I'll burn that bridge when I cross it.

    Fast-forward: I now keep running my LTS versions long after I should have upgraded, because by the time regular support ends, everything is so much out of date, and there is so much to re-install from scratch, that the job is just that much more daunting. In fact it was only when my laptop died that I replaced its 12.04 install with 18.04. That was never the plan, but it did happen.

    Another unforeseen but MAJOR PITA is that an Ubuntu distro may still be supported, but the repos don't keep up. When your LTS version is older than two years, regular updates for most applications that run on that version tend to stop.

    So yes, there definitely is a downside to LTS versions and, based on a decade of experience, I no longer prefer them.

    1. Yankee Doodle Doofus Bronze badge

      Re: Not just Redhat

      I've not run into the situation you describe, but likely only because I can barely (sometimes even fail to) make it the two years between LTS releases before getting bored and wanting to check out something newer, as far as desktop machines go. At home, where things are less mission-critical, I have landed on Arch, and am so far loving the rolling-release model. Sure, things break now and then, but I've always been able to fix it, and I enjoy the tinkering. In the office, I am on Debian stable (current version 12), and will likely stay there until version 13 has been out for a few months and then either upgrade or, more likely, do a fresh install. I've got my mom's old PCs running Lubuntu 22.04 (weak laptop), and Mint 21 (ancient but decent desktop). Sometime this summer I will probably wipe the laptop and do a fresh install with Lubuntu 24.04

    2. Anonymous Coward
      Anonymous Coward

      Re: Not just Redhat

      While hardly a critical case; I quite like the Vassal board game application. I've had that working on older releases of Manjaro without issue; however, to keep it running on the bleeding edge means exploring obscure configuration files - and indeed someone to tell you where to look.

      When your application not working means rebooting to other OS to run it, that's when all of a sudden it becomes really important to have an "LTS" release where all of the surrounding stuff that you actually use your operating system for matters.

      Pick something rather more important, such as the German civil service adopting Linux; I am fairly certain they will want LTS releases, in the sense that if their applications break, they will be ditching Linux.
