Oracle releases experimental next-gen kernel build

Oracle's Linux engineers have released their build of kernel 6.9 for Oracle Linux – and they're already planning for 6.10 and beyond. In April, Oracle updated its own kernel build for Oracle Linux, the UEK-next kernel, which is a continuous integration Linux kernel release. This has just borne fruit in the form of a new …

  1. Pascal Monett Silver badge

    Btrfs

    Excuse me if I am a bit confused, but how many file systems do we actually need ?

    I understand why FAT fell by the wayside, and I get that NTFS might not be the best of breed, but it's been since the 1960s that we've been studying the problem.

    Why don't we have a good, reliable, standard file system yet ? Why does every vendor still have their own ?

    1. Spazturtle Silver badge

      Re: Btrfs

      We do, it is called ZFS, but it is under a licence that is incompatible with the GPL, so it can't be added to Linux.

      ext4 and NTFS have no built in fault tolerance so you will slowly get silent data corruption. This is becoming more of an issue as we move to SSDs with even more bits per cell. SLC will retain data for ~15 years but MLC, TLC and QLC each have worse data retention. (People with Nintendo Wii Us recently found this out when they tried to download software ahead of the shut down of Nintendo's server and found that the console would not boot as the MLC chips had discharged.)

      Btrfs was meant to be the Linux FS of the future but it keeps having data eating bugs.

      F2FS is designed for systems where there is no SSD controller so the OS can directly address the NAND. So it is only useful for small devices, it is also quite a simple FS.

      Microsoft picked up the new filesystem shotgun and managed to blow off their own leg with it when they created ReFS, and so they gave up on making it the new default and crawled back to NTFS.

      1. JamesTGrant Silver badge

        Re: Btrfs

        That’s a bit poetic - ZFS was Sun Microsystems, which is now all owned by Oracle.

      2. Anonymous Coward
        Anonymous Coward

        Re: Btrfs

        >> "We do it is called ZFS, but it is under a licence that is incompatible with GPL so it can't be added to Linux."

        ZFS is also overly complex and isn't free from its own problems.

        It's also worth remembering that Sun designed ZFS for managing a large number of spinning-rust drives in a software RAID configuration, back when Sun decided to jump from hard drives using parallel SCSI to hard drives with Fibre Channel interfaces (for which there weren't any RAID controllers), and paying Veritas for their file system wasn't a good option. Which is why ZFS was designed the way it is: aimed at large storage arrays using spinning rust.

        >> "ext4 and NTFS have no built in fault tolerance so you will slowly get silent data corruption."

        That's a silly statement, because silent data corruption is far from guaranteed; quite the contrary, actually. In an average PC, literally every transmission path inside the machine is error-checked, and this includes disk controllers. The only area where that's not the case for regular PCs and laptops is RAM, because vendors have decided that saving a few bucks by not using ECC memory is a good idea, not least because they can then charge an inflated premium for ECC memory in servers and workstations. Memory is susceptible to errors for a number of reasons, and when the error is in a data block that is about to be written to mass storage, then corrupted data ends up on the disk. Which is why we still see occasional silent data corruption.

        However, that's also something ZFS doesn't protect against, because it only checks whether data, once written to mass storage, has become corrupted (which, thanks to the error checking everywhere else, really only happens if there's a hardware problem). Which means ZFS without ECC RAM is pretty much worthless.
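The argument above, that a filesystem checksum can't catch corruption which happens in RAM before the write, can be sketched in a few lines. This is a hypothetical toy model, not real ZFS code: the "filesystem" checksums whatever bytes reach the write path, so bytes flipped beforehand verify perfectly.

```python
import hashlib

def write_block(data: bytes) -> dict:
    """Sketch of a checksumming filesystem write: the checksum is
    computed over whatever bytes the write path receives."""
    return {"data": data, "checksum": hashlib.sha256(data).hexdigest()}

def verify_block(block: dict) -> bool:
    """On read, the stored checksum is compared against the stored data."""
    return hashlib.sha256(block["data"]).hexdigest() == block["checksum"]

original = bytes(b"important payload")

# Simulate a bit flip in non-ECC RAM *before* the write reaches the filesystem.
corrupted = bytearray(original)
corrupted[0] ^= 0x01
block = write_block(bytes(corrupted))

# The checksum verifies, yet the stored data is not what the application
# intended: the corruption is silent and end-to-end undetectable here.
assert verify_block(block)
assert block["data"] != original
```

The same sketch also shows what the checksum *does* catch: flip a bit in `block["data"]` after the write and `verify_block` fails, which is the on-media bit-rot case the filesystem is designed for.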

        On the other side, ext4, NTFS, XFS and most other file systems without built-in error checking perform perfectly fine on systems with ECC RAM, such as servers.

        >> "This is becoming more of an issue as we move to SSDs with even more bits per cell. SLC will retain data for ~15 years but MLC, TLC and QLC each have worse data retention. (People with Nintendo Wii Us recently found this out when they tried to download software ahead of the shut down of Nintendo's server and found that the console would not boot as the MLC chips had discharged.)"

        Not really. SSD data retention is only an issue for powered-off drives (as long as the drive is powered, GC will make sure the data is cyclically refreshed), and the lesson here is that SSDs simply make for very poor cold storage media.

        None of this is related to the file system, because when SSD cells lose their charge, ZFS can't do anything to bring your data back.

        >> "Btrfs was meant to be the Linux FS of the future but it keeps having data eating bugs."

        There's a "data eating bug" which affects its RAID5/6 software RAID mode, and which is easily avoidable by using mdraid or hardware RAID instead. The other Btrfs RAID modes are unaffected.

        Other than the RAID5/6 mode, Btrfs has been pretty reliable (we have large storage systems based around it), which isn't really surprising considering that SUSE has been using it as the supported default file system for its enterprise Linux for many years.

        >> "Microsoft picked up the new filesystem shotgun and managed to blow of their own leg with it when they created ReFS and so they gave up on making it the new default and crawled back to NTFS."

        Yes, ReFS was a crapshoot, made worse by the version chaos which breaks interoperability between different Windows (and ReFS) versions. But then, this is Microsoft, who probably just realised that its customer base will happily carry on with NTFS for as long as it takes to get them all into the cloud, so why waste resources on improving ReFS?

        1. Spazturtle Silver badge

          Re: Btrfs

          Btrfs, ZFS and ReFS have checksumming and parity checks: if the copy of a file on one disk becomes corrupt, the filesystem will replace it with a non-corrupt version from the other disks.

          Even on a single disk ZFS has checksums for all data, and metadata is stored with a redundant copy so at a minimum you can detect data corruption. You can also enable on disk redundancy for all data.
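The self-healing read described above can be sketched as a toy two-disk mirror with per-block checksums. This is an illustrative model only, not how ZFS or Btrfs are actually implemented: a read validates each copy against the stored checksum, serves a good copy, and rewrites any copy that failed.

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class MirroredStore:
    """Toy two-way mirror with a per-block checksum (hypothetical sketch)."""

    def __init__(self, data: bytes):
        self.disks = [bytearray(data), bytearray(data)]
        self.expected = checksum(data)

    def corrupt(self, disk: int, offset: int):
        # Simulate silent bit rot on one copy.
        self.disks[disk][offset] ^= 0xFF

    def read(self) -> bytes:
        for copy in self.disks:
            if checksum(bytes(copy)) == self.expected:
                # Self-heal: rewrite any copy that fails its checksum
                # from the known-good one.
                for j in range(len(self.disks)):
                    if checksum(bytes(self.disks[j])) != self.expected:
                        self.disks[j] = bytearray(copy)
                return bytes(copy)
        raise IOError("all copies failed checksum")

store = MirroredStore(b"hello, world")
store.corrupt(0, 0)
assert store.read() == b"hello, world"           # served from the good copy
assert bytes(store.disks[0]) == b"hello, world"  # bad copy repaired in place
```

Without redundancy (a single copy), the same checksum can still *detect* the corruption, which matches the single-disk ZFS behaviour described above: detection is always possible, repair needs a second copy.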

          Also I have yet to see actual documentation of an SSD controller that shows that they refresh old data.

        2. JoeCool Silver badge

          "Overly Complex" ?

          I'm not taking the say so of an AC on that.

          I use OpenZFS, and its architecture and management interface are brilliant. I can't think of an OSS project that gives me as much complete functionality as simply as ZFS does.

          "aimed at large storage arrays using spinning rust."

          I think the design is far more general than that. Small arrays work very well also. At that time there was nothing other than spinning rust, but so what? Disk access is disk access is storage access.

          "when SSD cells lose their charge then ZFS can't do zilch to bring your data back."

          Like any drive failure, that's why you use the "array" thingy you seemed to dismiss.

          "Which means ZFS without ECC RAM is pretty much worthless."

          FFS. I am done.

    2. Baggypants

      Re: Btrfs

      You're assuming one filesystem is a good fit for every use-case in the enterprise. Different filesystems have their own pros and cons depending on what you need them to do. ext4 is best if you need high legacy compatibility; xfs is great if you need to wrangle huge files or filesystems; btrfs is great if you need advanced storage capabilities at the filesystem level, like copy-on-write or snapshots, which is very useful in VM or container environments. There are others, each with their strengths and weaknesses.
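The copy-on-write snapshot idea mentioned above can be illustrated with a toy block map. This is a hypothetical sketch of the general technique, not Btrfs internals: a snapshot copies only the block references (cheap metadata), and a later write installs a new block while the snapshot keeps pointing at the old one.

```python
class CowVolume:
    """Toy copy-on-write volume: a mapping of block number -> data.

    Snapshots share block references; writes replace a reference
    rather than mutating shared data (illustrative only).
    """

    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))

    def snapshot(self):
        snap = CowVolume([])
        # Only the references are copied, not the data blocks,
        # which is why snapshotting is near-instant.
        snap.blocks = dict(self.blocks)
        return snap

    def write(self, n, data):
        # New block for the live volume; the snapshot's reference
        # still points at the original data.
        self.blocks[n] = data

vol = CowVolume([b"aa", b"bb"])
snap = vol.snapshot()      # cheap: no data copied
vol.write(0, b"xx")

assert vol.blocks[0] == b"xx"    # live volume sees the new data
assert snap.blocks[0] == b"aa"   # snapshot still sees the old data
```

This is also why CoW filesystems suit VM and container hosts: cloning an image is a metadata operation, and only the blocks that diverge afterwards consume new space.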

      1. Smirnov

        Re: Btrfs

        >> "xfs is great if you need to wrangle huge files or filesystems"

        XFS is also great for smaller files (XFS used to be slow when moving small files, but that has no longer been true for over a decade).

        I'd go as far to say that XFS, not ext4, is probably the best choice as a general use file system unless there's a specific reason to use something else.

    3. ChrisElvidge Silver badge

      Re: Btrfs

      We had ReiserFS some years ago. Pity no-one picked it up after the great imprisonment.

    4. karlkarl

      Re: Btrfs

      > Why don't we have a good, reliable, standard file system yet ? Why does every vendor still have their own ?

      We have the UFS (Universal File System) featured by a number of Unix operating systems (UnixWare, BSD, Solaris).

      ... No, they are not compatible with one another ;)

    5. Liam Proven (Written by Reg staff) Silver badge

      Re: Btrfs

      [Author here]

      > how many file systems do we actually need ?

      Aside from the point that this story, and this kernel, isn't about filesystems at all, really, I think that your question betrays what seems to me to be an implicit assumption, one that underpins a lot of the modern computer world but is erroneous:

      That what we have today is essentially fine, perfected, and ideal; as a result, all that is left is room for minor incremental improvements.

      This is manifestly false. It is _obviously_ wrong.

      * If existing xNix packaging tech was fine, we would not have Snap, Flatpak, OStree, Appimage, Nix, Guix, etc.

      * If existing desktops were fine, we wouldn't have 20+ Linux desktops in regular maintenance, some of them almost cosmetically indistinguishable from one another but implemented using different toolkits or languages.

      * If existing text editors were perfect there wouldn't be 5x as many as there are desktop projects.

      * If init systems were a solved problem there would not be systemd advocacy.

      * If programming languages were done and dusted then Rust, Go, F#, Zig, Hare, Clojure, and a hundred others wouldn't be in active R&D.

      Etc. etc.

      There are tons of problems, issues, imperfections, snags, drawbacks, penalties and whatnot with current filesystems.

      Examples:

      1. Most were designed for spinning media but now most computers use SSD storage

      2. Most natively only handle single drives and rely on other layers to handle and manage >1 device

      3. There are multiple such layers as well (LVM tools) and they too have issues.

      4. Some filesystems are advanced and strongly advocated by some users, but lack tools rivals consider essential. E.g. Btrfs has no meaningful way to repair a volume, and yet it also can't accurately report free space, making it easy to fill a volume, which inevitably corrupts it.

      5. Other FSs work great but are hobbled by legal restrictions, e.g. ZFS; some are highly proprietary, e.g. NTFS, APFS, and may be inaccessible from FOSS.

      6. Others are unfinished but may in time fix this, e.g. bcachefs and Stratis

      7. Others are aiming still higher, e.g. HAMMER2 which aims to be mounted live by >1 OS over a network and gracefully handle that

      There is *lots* left to do.

  2. FuzzyTheBear Silver badge
    Flame

    Didn't have enough yet ?

    I don't understand why people should do business with Oracle at all.

    The way they treat their customers, I am surprised they have a single one left.

    1. williamyf Silver badge

      Re: Didn't have enough yet ?

      While I DO advocate avoiding Oracle like the plague, they bring some good technologies to the fore, and have very robust operations support.

      If you do need one or both of those things, and have the money to pay, then go for it.

      Otherwise, avoid.

    2. jailbird

      Re: Didn't have enough yet ?

      Business? Unlike Red Hat, Oracle Linux isn't paywalled, it's as free to use as Alma or Rocky. You just don't get support without paying.

  3. chasil

    El Repo Mainline

    An alternative to the UEK is the El Repo Mainline, currently offering Linux kernel version 6.9.8 for RHEL-compatible v8 and v9 platforms.

    The ML kernel is "a kernel of last resort" for hardware problems and driver development. It will never, ever become a production kernel for the release in question.

    At several points in the past, btrfs support was better in ML than the UEK. The bcachefs filesystem itself is better than btrfs, and ML will likely be the first place to get it on RHEL.

    https://elrepo.org/wiki/doku.php?id=kernel-ml

  4. Bill Bickle

    Three stooges ?

    Is this part of Oracle teaming up with SUSE and the Rocky/CIQ companies to try and take Enterprise Linux away from Red Hat? I am pretty sure the top folks at Oracle don't like having to say their Linux is "Red Hat compatible" (a who's-your-Daddy kinda thing), and are maybe hoping the whole world will come to dislike and distrust Red Hat and start looking for a better alternative. Honestly, I don't see how Oracle wins that contest, but it is a free country here!
