Linux is about to lose a feature – over a personality clash

The first release candidate of Linux 6.17 is out, without any bcachefs changes… but not for any technical reasons. This is bad. As we reported recently, kernel 6.17 is approaching. Linus Torvalds announced the first release candidate on Sunday, August 10. Of course, he is irked about something – that's not unusual – but this …

  1. NewModelArmy Silver badge

    Anecdotally... No To BTRFS Too

    I have run Fedora as my main OS for ages, and when I installed an SSD the default installer set up BTRFS. Due to BTRFS I have had multiple crashes, some of which recovered and some of which did not. After the first non-bootable crash I was able to recover the data and reinstall. After the second I could not access the data at all, so I built a new machine and made sure EXT4 was the filesystem used.

    I do check the Fedora forums to see whether BTRFS has caused problems, and there are sometimes references to it, but many people are not aware that BTRFS is the issue and simply state that the PC crashed.

    It would be good for Fedora to default back to EXT4 in their installer, as changing from BTRFS to EXT4 is something the average person will have difficulty with.

    1. DS999 Silver badge

      Re: Anecdotally... No To BTRFS Too

      Yep, I've continued using ext4 because of its stability record. Other filesystems may have better performance, but my filesystem performance has increased so much over the years, going from HDD to SSD and now NVMe, that even a magical order-of-magnitude improvement from another filesystem wouldn't be worth it to me if it meant even the smallest risk to my data. I mirror things to avoid data loss from hardware failure, but if the filesystem itself is the cause of the issues, mirroring isn't going to do much good!

      1. jake Silver badge

        Re: Anecdotally... No To BTRFS Too

        I've used ext4 on Linux desktops since it first appeared in Slackware 13.0 in 2009 (earlier on text boxen), with absolutely zero filesystem problems. At the moment I see no reason to change, despite having tried all of the alternatives, on various machines, for various reasons.

        Admittedly, the servers often have alternates, also for various reasons.

        I don't currently have any disks formatted for either bcachefs -or- btrfs. I see no need.

        1. Grogan

          Re: Anecdotally... No To BTRFS Too

          I have never lost one byte of data on ext2 through ext4. I always stick with "ext" filesystems because they are usually the best supported (by the kernel devs) and the most reliable. Distributions are silly for making the filesystem du jour the default. Some of them did that with reiserfs too, back in the day, which was fine and dandy until you had corruption and reiserfsck finished it off for you.

      2. MMM4

        Re: Anecdotally... No To BTRFS Too

        The main value of newer filesystems like ZFS, btrfs, bcachefs, APFS, etc. is not performance, it's the new features:

        - Copy-on-write, with snapshots, deduplication, etc.

        - Integrated volume management. No more LVM, complex RAID configuration, etc.

        In simple cases you don't need some of these features. And you don't think you need the other features until you've tried them.

        1. FIA Silver badge

          Re: Anecdotally... No To BTRFS Too

          You forgot one:

          - Full data checksumming.

          Like many people, my home server is an ex desktop machine.

          It has DDR4 memory, some Crucial stuff that runs at 3200MHz. Like a lot of cheaper gaming memory, that speed is provided by an XMP profile, i.e. it's technically an overclock.

          The memory worked fine in my desktop for years, but when it was used in the home server, with 9 SATA drives all performing I/O at the same time, errors started to creep in.

          The machine didn't crash; all that happened was that ZFS started detecting corruption and taking volumes offline.

          Had I still been on ffs (I'm a FreeBSD user) I'd've slowly been corrupting random files without realising it. I may never even have noticed as the occasional crashes would just have been put down to a machine being left on 24/7 that wasn't designed for it.
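          What full-data checksumming buys you can be sketched in a few lines of shell. This is a toy illustration of the idea only, nothing to do with ZFS's actual on-disk format: record a checksum at write time, silently flip one byte, and re-verify.

          ```shell
          # Toy demo of full-data checksumming: a "block" whose checksum was
          # recorded at write time fails verification after silent corruption.
          head -c 4096 /dev/zero > block.bin    # a 4 KiB block of known content
          sha256sum block.bin > block.sha256    # checksum stored alongside the data
          sha256sum -c block.sha256             # verifies clean: "block.bin: OK"
          # Simulate a single flipped byte (bad RAM, flaky cable, firmware bug)...
          printf '\377' | dd of=block.bin bs=1 seek=100 conv=notrunc 2>/dev/null
          # ...and the mismatch is caught rather than bad data being returned:
          sha256sum -c block.sha256 || echo "corruption detected"
          ```

          A filesystem without data checksums would happily hand back the corrupted block; that silent hand-back is exactly the failure mode the ZFS volumes above were flagging.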

          1. l8gravely

            Re: Anecdotally... No To BTRFS Too

            I actually view the inclusion of RAID in the filesystem as a mis-feature. Just look at all the problems btrfs has had with its RAID5 implementation. And ZFS has had major problems with expanding or growing (or shrinking!) zpools, much less filesystems, due to design issues. Use MD for mirroring/RAID, LVM for logical volumes, and filesystems on top. Three layers; it's not that complicated, and it gives you more flexibility and control.

            1. FIA Silver badge

              Re: Anecdotally... No To BTRFS Too

              The argument with ZFS is that, because the FS understands the underlying block layer, you can do things like avoid the RAID write hole without needing battery-backed RAM.

              You've been able to expand RAIDZ arrays for years if you want to grow the entire array: replace each device with a larger one, and the raid expands into the space. I did this years ago with my current storage at home, moving from 4TB disks to 10TB without a reformat.

              The latest ZFS has introduced RAID growing too, so you can now add extra disks to an existing array to expand it, although it makes less optimal use of the space than creating a fresh array would.
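              For concreteness, the two expansion paths look roughly like this. A hedged sketch only: the pool name tank, the vdev name raidz1-0 and the device names are placeholders, and the attach form needs a recent OpenZFS with RAIDZ expansion support.

              ```shell
              # Path 1: grow by replacing each member disk with a larger one.
              zpool set autoexpand=on tank
              zpool replace tank sda sdg   # wait for resilver, repeat per disk
              # Capacity grows once the last small disk has been replaced.

              # Path 2: RAIDZ expansion - add a disk to an existing raidz vdev.
              zpool attach tank raidz1-0 sdh
              ```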

              Shrinking volumes is not included by design, as it's a very very bad idea. ;) (It only takes one bit of data to get missed on the shrink and your FS is hosed. ZFS was designed for big tin, so priority was given to data integrity over features like this).

              Unless you absolutely can't afford it, it's always better to re-create and copy the data than try and shrink in place. If you can't afford it, and your data is critical you're doing something wrong and probably need to re-prioritise.

              If you're a home user, that's what BTRFS is for. ;)

    2. cyberdemon Silver badge
      Linux

      Re: Anecdotally... No To BTRFS Too

      Been using BTRFS for years (ever since reiserfs was er, deprecated for political reasons) and never had any problems whatsoever with it

      EXTx, on the other hand, seems to need fscking every now and then to keep it in working order.

      1. Androgynous Cow Herd

        Re: Anecdotally... No To BTRFS Too

        I miss reiserfs.

        It was killer!

        1. coconuthead

          Re: Anecdotally... No To BTRFS Too

          Upvoted for the joke, but my installation of ReiserFS scribbled all over itself due to a bug.

          1. Orv

            Re: Anecdotally... No To BTRFS Too

            I had a number of issues with ReiserFS back in the day, but the problem turned out to be a bad IDE controller. So I can't blame it for the corruption, but I can say it did not recover gracefully.

        2. Anonymous Coward
          Anonymous Coward

          Re: Anecdotally... No To BTRFS Too

          Hans up if you get it

      2. FIA Silver badge

        Re: Anecdotally... No To BTRFS Too

        > Been using BTRFS for years (ever since reiserfs was er, deprecated for political reasons) and never had any problems whatsoever with it

        The bits that work seem to work okay, but there are deffo bugs in some of the more esoteric RAID code. I used it for MythTV a while ago so I could just chuck more storage in and, again, had no issues.

        I also had success with XFS for Myth, but that was due to ext4 not deleting large files quickly.

        > EXTx on the other hand, seem to need fscking every now and then to keep it in working order

        I'll be honest, I've been quite shocked at the genuine love for ext4 that some people in this thread seem to have. Personally I put it next to FFS/UFS in my list of filesystems I'd use. They're fine but I'm going to have to fsck them every so often, especially if something crashes. (I've had to fsck journalled ext4 before).

        I'm not a heavy Linux user though, so my experiences probably aren't up to date. (And all my systems that can run ZFS do: Proxmox for the VMs, and FreeBSD for OPNsense and my server.)

    3. K555 Bronze badge

      Re: Anecdotally... No To BTRFS Too

      I lost my Suse install on my work laptop to it. Once it's broken, you become painfully aware of the lack of tools to repair it!

      1. RAMChYLD Silver badge

        Re: Anecdotally... No To BTRFS Too

        I think the issue with BTRFS is clear - the devs put so much trust in their own work that it became too big to fail.

        The BTRFS mantra I've seen time and time again is that "file checking isn't needed, because the filesystem is self-healing". Until it isn't.

        BTRFS is fine for single-volume disks (I've never experienced a single fault in the 15 years I've been using it), but its RAID code is still unusable even to this day, and using btrfs RAID is a quick way to ensure you lose your data.

        That said, I'm sad to see bcachefs go. The Linux kernel devs are extremely hostile to ZFS, even though the version the ZoL team are working with is an old one, forked from before Oracle took over - to the point of not wanting it in-tree, and even working against the ZFS team by changing symbols with every Linux release, just because they fear Oracle's lawyers. (Over what? The version forked from before Oracle closed the source? Pretty sure license changes are not retroactive, and Oracle would get laughed out of the courtroom by the EFF and FSF. Hell, they could even threaten to revoke Oracle's membership of the Linux Foundation.)

    4. Doctor Syntax Silver badge

      Re: Anecdotally... No To BTRFS Too

      Speed is good but reliability is essential. The file system's function is to save your data and return it as required. If it fails to return it it doesn't matter how quickly it fails.

      1. K555 Bronze badge

        Re: Anecdotally... No To BTRFS Too

        The only place I now use btrfs is for a Steam library. It's pretty quick, and I can make use of compression and de-duplication, so I get something like a 1.25x saving on space overall. If it gets damaged, I can just have Steam verify/fix the files, or rebuild it if the worst comes to the worst.
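        For anyone copying that setup, the compression side is just a mount option. A sketch only - the UUID and mount point are placeholders - and note that btrfs de-duplication is out-of-band (via tools such as duperemove) rather than a mount option:

        ```
        # /etc/fstab - hypothetical entry for a compressed btrfs Steam library
        UUID=xxxx-xxxx  /home/me/steam  btrfs  compress=zstd:3,noatime  0 0
        ```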

    5. LazLong

      Why no love for XFS?

      The title says it all. I've been using XFS, in various forms, since '99 on servers and users' systems, on Irix and Linux, and never lost any data.

      ZFS is great as well, but there's not often a true use case for it among common users.

      1. cosmodrome

        Re: Why no love for XFS?

        Because unlike ext4 it can't shrink existing filesystems? For me, relying massively on LVM, that's a knock-out point. Unfortunately, although I had been running XFS since the days of IRIX, with limited spare capacity, the need to replace drives, and the resulting need to resize partitions, ext4 clearly beats XFS despite its reliability and performance.

      2. mirachu Bronze badge

        Re: Why no love for XFS?

        I managed to tickle an XFS bug back in 2004 or so that caused data loss. I was able to find *one* report of an identical occurrence.

      3. Nate Amsden Silver badge

        Re: Why no love for XFS?

        Curious - what is a good use case for XFS for common users?

        I have deployed XFS on Ubuntu primarily for DB servers (and that was mainly a request from the DBA; given real-world usage on the servers in question, I don't think it would matter if they ran ext4, as they are not heavily used), and also for Veeam backups with xfs+reflinks. In all cases the filesystems were hosted on a Fibre Channel SAN. The Veeam use case (or something else that leverages reflinks) is the only one I can think of where XFS is a good fit for a somewhat common workload; obviously super-high-performance stuff may still benefit from XFS over other options, though most users don't drive super high performance anything. Interacting with reflinks from the Linux side is super weird, though; in my experience you basically can't do anything with the data directly and have to let the app do everything in its own special way.

        I have little doubt that I could get by fine with XFS in other use cases, but for the other filesystems I use, ext4 and zfs, I haven't come across a reason to change them to anything else. I even run reiserfs3 on my personal Cyrus IMAP filestore (1.1M files in 42GB) due to its space efficiency (though at some point I will have to migrate that one, I suppose, if they are dropping it). ZFS for me is mainly used for snapshots and snapshot replication, being a mature system for that; most of my Z filesystems are on Fibre Channel SAN too. ext4 is used for everything else because it works fine.

      4. Spanky_McPherson

        Re: Why no love for XFS?

        Why no love for XFS? Compared to bcachefs, it lacks:

        Snapshots, checksums, compression, encryption, RAID, and dedicated cache devices.

    6. MrBanana Silver badge

      Re: Anecdotally... No To BTRFS Too

      I had a use case for BTRFS (many small files), but it has bitten me twice, so never again. For the average user, stability outweighs speed and size.

  2. katrinab Silver badge
    Devil

    I guess this is why I'm a FreeBSD fangirl.

    zfs as a first class citizen, and none of the toxic drama that seems to be present in Linux circles.

    Also, no SystemD.

    1. VoiceOfTruth Silver badge

      ZFS is still way ahead of BTRFS in terms of reliability. It is the benchmark to which other file systems aspire. It is not perfect for everything, but what it does it does extremely well.

    2. kmorwath

      Every time I take a look at the TrueNAS logs (now built on Debian, unluckily), the kernel complains that ZFS taints it... I'd like to know how many kernels at Google, Facebook, Amazon, etc. are "tainted" because they load proprietary code they will never release to competitors...

      1. Benegesserict Cumbersomberbatch Silver badge

        A GPL (rather than CDDL) ZFS would be possible, if and only if OpenZFS (or someone) were to audit and rewrite every line of Sun-engineered code in the codebase. Larry will tap you on the head with a lawyer if you get it wrong. Any takers?

        Taint is just a scary pejorative chosen in this context to make ZFS users think thrice about not swallowing the Free Software dogma as a bolus.

        Once you're fine with that idea, carry on...

        1. Jamie Jones Silver badge

          Yes!

          I'm fed up with this GPL-cultist argument against ZFS, and as I've had too many upvotes lately, I'm going to say so!

          The GPL is fine - it's helped produce a huge ecosystem of software

          It also has many issues, in that it will swallow more liberally licensed products, and to use the Linux kernel terminology, actually "taints" *them*

          Yeah, people argue that ZFS purposely chose the CDDL to hamper GPL products, and I don't know or care whether that's true. If it is, though, it's ironic that the GPL folk realise their warped sense of purity has got in their way, yet are still so un-self-aware that they don't realise that, while they have been played, the solution is in their court.

          Why rewrite ZFS so that you can apply a less free license to it?

          The reason ZFS isn't allowed in the kernel is a Linux/GPL issue, not a CDDL one.

          1. Jamie Jones Silver badge

            If the license someone chooses for their software isn't compatible with my licence due to restrictions on *my* side, demanding that the other people change their license is the height of arrogance.

            It's cultist behaviour, as the downvotes prove!

      2. bazza Silver badge

        The kernel complaining that it is tainted is nuts. It isn't tainted in any way, according to the terms of the GPL2 license.

        The problems start only if one distributes a Linux kernel binary and ZFS together. And a running kernel is not in the act of being distributed, it's running. And there's absolutely nothing in GPL2 that stops one choosing to run ZFS in Linux in the privacy of one's own server.

    3. ReaperX7

      I think it's high time the kernel developers squashed this petty CDDL vs GPL license issue that isn't an issue, where they forcibly break ZFS with every kernel release, and just adopted OpenZFS. Oracle obviously doesn't care, because most of its systems are now GNU/Linux, not Solaris (although Solaris is still somewhat maintained). Or at least be civil with OpenZFS and let them catch up faster by including them in development cycles, so OpenZFS isn't left rotting on the LTS kernels while the mainline kernels, the zen kernel and others get fresh code.

      1. VoiceOfTruth Silver badge

        There is parochialism in the Linux kernel world. Not invented here = it's not getting in.

        1. that one in the corner Silver badge

          > There is parochialism in the Linux kernel world. Not invented here = it's not getting in

          Ah, so that explains why the Linux filesystem docs say they don't support ADFS (Acorn), Amiga (Commodore), NTFS or any flavour of FAT (I really hope I don't have to tell you where those came from); why v9fs (Plan 9 from Bell Labs), the BeOS filesystem and Macintosh HFS never get a look in.

          Nor does Linux support NFS or SMB for use across a LAN, because Linux does not have any TCP/IP or other networks implemented.

          None of which matters, because Linux has never supported CD-ROMs either, so practically nobody has been able to install Linux, one of the major reasons it was effectively stillborn back in the 1990s.

          Oh, hang on, none of the above was even remotely true! Tut tut, whoever would think of lying like that.

          1. m4r35n357 Silver badge

            The five upvoters?

          2. Liam Proven (Written by Reg staff) Silver badge

            > Oh, hang on, none of the above was even remotely true!

            I call BS.

            I don't think it is _the_ important point but it has an element of truth.

            The ones you mention are all lousy examples. There is no Commodore code, say, in the kernel. All those are simple, ancient filesystems which were reverse-engineered, or they were FOSS (e.g. 9p, exFAT) and could be used directly.

            That is not relevant.

            The point here is not support for 1980s stuff. The point is: modern filesystems, which manage partitioning for you, manage arrays and mirroring for you, which have instant COW snapshots so you can just undo config changes, even ones bad enough that your computer won't boot successfully, and which are fault-tolerant and won't get in a mess that means you lose data.

            Btrfs is native Linux code and it fails at this.

            But there are modern FSes from other OSes in Linux. AdvFS from DEC Tru64, XFS from SGI Irix, JFS from IBM AIX. They don't do all of this but they are solid -- and non-native.

            That's why I say your examples are badly chosen.

            But the target here is ZFS, which is still the state of the art. Linux has nothing like it, and unless Oracle relicenses it, which it won't, then that means writing a new one.

            Bcachefs is the best we have.

            But its nerd-king clashes with the nerd-emperor and his nerd acolytes.

            So we are not getting it in the kernel.

            1. that one in the corner Silver badge

              The point was to respond to the claim of parochialism - and the reason ZFS is not in the kernel is due solely to licensing issues. Were ZFS to be relicensed in a fashion friendly to the rest of the kernel, it would be pulled in in a heartbeat.

              Unless you have just decided that a clash of licences is merely parochialism.

              As for whether my examples were relevant: it is utterly irrelevant whether they are all from the 1980s, 1970s or even 1960s. The claim I responded to was that Linux simply refuses anything NIH. None of those examples was invented by or for the Linux kernel team - and every one of them was happily adopted by that team. Hence, no parochialism.

              I am happy to have the honour of your response to any of my comments, but please do yourself the honour of responding to what was actually stated - you let yourself down.

              1. Jamie Jones Silver badge

                > Were ZFS to be relicensed in a fashion friendly to the rest of the kernel, it would be pulled in in a heartbeat.

                That's not a ZFS/CDDL problem, it's a GPL problem.

                > As for whether my examples were relevant: it is utterly irrelevant whether they are all from the 1980s, 1970s or even 1960s: the claim I responded to was that Linux simply refuses anything NIH. None of those examples were invented by or for the Linux kernel team - and every one of them was happily adopted by that team. Hence, no parochialism.

                Huh? Liam clarified how your examples were irrelevant - all the things you mention are not in the kernel directly; they are recreations and emulations written specifically for Linux.

                Not once did Liam imply that a compatible, GPL-licensed implementation of ZFS would be denied from the kernel, which is what you're implying!

                Let's go down your list:

                • ADFS (Acorn) - Linux support for the Acorn Disc Filing System (ADFS) was written by Russell King, who authored the Linux filesystem implementation; the kernel has included ADFS support since the 2.1.x series.
                • Amiga (Commodore) - the Linux kernel's Amiga filesystem support (AFFS, the Amiga Fast File System) is likewise a native Linux driver rather than Commodore code.
                • NTFS or any flavour of FAT (really hope I don't have to tell you where those came from);

                  Besides the kernel driver, NTFS-3G is another popular open-source solution for NTFS support in Linux, developed by Szabolcs Szakacsits. It is a FUSE-based implementation, allowing it to work on various operating systems.

                  None of this came from "where you think they came from".

                • TCP/IP - The initial implementation of the TCP/IP stack in the Linux kernel was a collaborative effort, rather than the work of a single author.

                  Ross Biro is credited with writing the original kernel-based networking code for Linux.

                  His initial routines, though incomplete, laid the foundation for Linux's networking capabilities.

                  Other developers like Donald Becker and Laurence Culhane contributed crucial components like the Ethernet drivers and SLIP driver.

                  The development of the Linux networking code progressed through various stages:

                  Fred van Kempen took over Ross Biro's code and released NET-2, which underwent several revisions.

                  Alan Cox then debugged Fred's code, leading to the stable NET-2D(ebugged) release, which was incorporated into the standard kernel before Linux 1.0.

                  Later versions, NET-3 and NET-4, were released with Linux versions 1.2.x and 2.x respectively.

                  The Linux kernel and its networking components continue to be developed and maintained by a large and active community of developers.

                etc.etc.etc.

      2. Jon 37

        This "petty" CDDL vs GPL license issue exists because Sun invented the CDDL with the deliberate intention of it being incompatible with the GPL. They wanted to open-source their code in a GPL-like way, because that was trendy and good PR, but they did not want to actually use the GPL, and/or they wanted to make it so Linux was unable to use their code.

        You can't mix CDDL and GPL code in the same program, and legally distribute it.

        Some people are happy to gamble that they can get away with it. You might be happy to make that bet yourself. But there are plenty of people who are not willing to take that risk.

        (See: The SCO vs The World lawsuits, where the fact that Linux scrupulously follows all licenses meant that SCO had no case).

        There are also people who consider breaking license agreements like that is morally wrong. (According to their personal morals, everyone has different morals and that's okay).

        1. bazza Silver badge

          >This "petty" CDDL vs GPL license issue was because Sun invented CDDL with the deliberate intention of being incompatible with the GPL.

          Whether it was deliberate or not is disputed; I doubt it was. Sun (when not owned by Oracle) was a pretty open company: it gave away a lot of things, like ZFS, DTrace and Java, and even the designs of its CPUs so you could go and make your own (and countries / companies did), and I'm pretty sure it had to donate NFS to avoid being blitzed by the US government in the early days.

          Also, remember that when ZFS was OSS'ed by Sun in 2006, Linux was nowhere near as dominant in the IT world as it is today. One should not map Linux's dominance today back onto an analysis of Sun's motivations then.

          1. m4r35n357 Silver badge

            It really WAS deliberate.

            1. Anonymous Coward
              Anonymous Coward

              Not as deliberate as the Licensing Bottom Inspectors @ Oracle.

        2. ptribble

          The idea that CDDL was chosen to be deliberately incompatible with GPL is simply untrue.

          Sun had to persuade a long list of existing licence holders and intellectual property owners to agree in order to open-source Solaris. CDDL was the compromise that resulted. (It also had to be a per-file license, rather than a per-project license, as lots of files in Solaris/OpenSolaris/illumos have different licenses.)

          We wouldn't have expected Linux to simply take the code for, say, ZFS, and simply recompile it. A cleanroom implementation of the format seemed more likely, and it's been a bit of a surprise that ZFSonLinux decided to import the existing source and create a porting layer.

      3. This post has been deleted by its author

      4. bazza Silver badge

        >I think it's high time the kernel developers just squash this petty CDDL vs GPL license issue that isn't an issue

        If only it were that simple. They can't change the license on Linux, because GPL2 won't let them.

        The problem is that to change the license on the kernel in any way, all the copyright holders who own bits of the Linux kernel would have to agree. That's nigh-on impossible; some are dead and their estates hold the copyright (probably unknowingly), some haven't been heard from for years and can't be contacted. If the core active community decided "oh well, we tried our best" and got on with it without full and comprehensive agreement from all the copyright holders, all it would take is for just one of them to creep out of the woodwork and lodge a complaint in court, and the kernel is screwed. That might be unlikely, but it's not impossible. All it takes is for one estate beneficiary to realise what they own and also be a lawyer, and "boom".

        Basically, the legal mechanisms that stop commercial concerns walking off with the kernel source code and making it entirely proprietary are the same mechanisms that prevent the source code being re-licensed. When you think about it, re-licensing on a different OSS license (e.g. one that might reach an understanding with CDDL) is the same kind of act as making it closed source proprietary.

        The kernel community can't change the license on ZFS because they don't own it.

        Oracle could solve this by releasing ZFS under GPL2. However, it would be mad to do so. It gets some commercial benefit from owning ZFS - Red Hat daren't touch it and hasn't got anything to match it.

        Ubuntu's approach is fine. If it's the end user who is downloading a kernel module and doing the "linking", there's no license problem.

      5. m4r35n357 Silver badge

        "Oracle doesn't care" - go on then, I dare you!

    4. sedregj
      Windows

      ZFS is marvelous.

      I use it on pfSense (FreeBSD) and Proxmox (Linux). Anecdotally, it beats the crap out of UFS on pfSense. I have a cluster of Dell servers, many ACPU based boxes and many Netgate hardware jobbies to worry about. Since ZFS became the default, I have not lost a filesystem, on mostly single-disc setups. Those should have an equal chance of data loss when power fails, but ZFS seems to fail safer.

      Lots of lovely Proxmox (ex VMware) boxes. The ones without Ceph have ZFS, including my home systems. Lovely.

      What isn't lovely is swapping out a failed disc in a zpool. Many of these systems have RAID controllers that I have converted to non-RAID so that ZFS can do its stuff properly. In RAID mode the controllers will do their periodic patrol reads and cast out discs that have been deemed to have failed. An amber light comes on and you swap it out - job done.

      It's not exactly the end of the world doing the ZFS equivalent, but working out which device is which can be a right old laugh. I can't help but feel that ZFS could notice that a similar disc has been swapped in for a failed one on the same bus/slot, and ask whether it should be used the next time you run zpool. You would be prompted by an email from zed.

      1. aaronmdjones

        > Its not exactly the end of the world doing the ZFS equivalent but working out which device is which can be a right old laugh

        I give human-readable names to each of the drives in my systems. You can use device-mapper for this or just a GUID partition table with one partition covering the whole drive -- GPT partitions can be given names independent of any filesystem (if any) that may be in them. In both of my NASes the names are the drive slot number. For example, one of my zpools has 12 drives in it of the form /dev/disk/by-id/dm-name-SLOT01 through /dev/disk/by-id/dm-name-SLOT12 -- thus, identifying which drive has failed is trivial, as it is simply the drive in the correspondingly numbered slot.

        The tricky part is working out which drive is which /when setting all of this up/, which can be accomplished by using smartctl to query each drive for its serial number, or just plugging in one drive at a time and watching dmesg to learn the drive identifier the kernel gave it (e.g. sdo) and then setting up /dev/sdo to be named SLOT15 (or whatever slot you plugged it into).
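        As a sketch of the GPT variant (device names, slot labels and the pool name are all hypothetical; sgdisk is one tool that can set GPT partition names):

        ```shell
        # One whole-disk partition per drive, named after its physical slot:
        sgdisk --new=1:0:0 --change-name=1:SLOT01 /dev/sda
        # udev then exposes a stable path for it:
        ls /dev/disk/by-partlabel/SLOT01
        # Build the pool from the named partitions rather than sdX names:
        zpool create tank raidz2 /dev/disk/by-partlabel/SLOT01 \
            /dev/disk/by-partlabel/SLOT02   # ...and so on for each slot
        ```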

    5. Anonymous Coward
      Anonymous Coward

      ZFS on BSD was rebased on ZoL so long ago that it is behind on features. There are still a handful of niceties, but in general the license controversy is the only reason not to use it on Linux instead.

      1. Jamie Jones Silver badge

        Huh? BSD ZFS and ZoL combining to produce OpenZFS is what keeps them *IN* parity. A lot of the development there is from previous BSD-ZFS users, and both projects now benefit from being the combined project.

        Anyway, basically saying that the only reason to run FreeBSD instead of Linux is ZFS shows you are either trolling or don't know what you're talking about.

    6. Sudosu Silver badge

      I use OmniOS (Solaris based) for my ZFS NAS

      OmniOS is always hard for me to install, and I often have to install it from a DVD, but once it's running it is a rock-solid, minimal OS.

      In the 10-ish years of running it I have never had a software crash, and my only issues were with one Z-array made from the %#$& 3TB Seagates, of which 5 of 8 failed consistently, kept failing... and got replaced.

      Maybe I should give OpenIndiana a shot as a desktop OS for a while.

  3. ecarlseen

    Add me to the list of people who have lost data to BTRFS

    Allegedly it is some degree of better now (maybe? I'm not playing guinea pig again), but it was rushed into production use way too quickly which certainly brings into question the judgment of the people deciding which filesystems to include in the kernel.

    1. may_i Silver badge

      Re: Add me to the list of people who have lost data to BTRFS

      I guess it depends on what you want to do with it. I've been running btrfs on Debian for at least the last two releases without any problems. However, I only use a single SSD and I don't use btrfs for RAID, subvolumes or anything advanced. I use it purely for snapshots so that I can ensure the backups taken by my URBackup server are consistent.

      1. ecarlseen

        Re: Add me to the list of people who have lost data to BTRFS

        Why do people like you even write these comments? What does it bring to the discussion?

        I didn't say it happens to everyone. Nobody is claiming it. Clearly it works well for many people. If it works for you, that's great and I'm happy for you. If it works at Meta, who has (comparatively) infinity dollars for systems validation before putting hardware and software into production, that's great and I'm happy for them. However, it has also failed for many people - apparently mostly in multiple-disk configurations where significantly more data is at risk.

        1. Anonymous Coward
          Anonymous Coward

          Re: Add me to the list of people who have lost data to BTRFS

          Why does the OP write the comment they did? "Well sure, there have been problem reports, no duh"

          Together, they can help establish a general "safe" area of usage. It works fine for single disks, with snapshots (that surprised me), but not for RAID (raid5 still had random data loss during normal usage as of... 2020?), and not for "advanced usage".

          I took an image of an ext4 volume with btrfs convert, and two months later I had to back everything up and restore from backup (with some loss). Initially the image had some unaligned pieces, and I later found that the "initial" image produced out-of-bounds reads when running btrfs check (which you're never supposed to have to do), eventually leading to data loss.

          Feel free to contribute your own experience. Does it work? does it not? Are you just going to say "yes" or "no" without any detail?

  4. may_i Silver badge

    An unfortunate turn of events

    I've seen that Kent has a quite abrasive style and forceful personality. It can be difficult to get along with people who are like that. Sometimes though, you have to.

    It's sad to see something which would undeniably be a positive option excluded from the kernel for seemingly the wrong reasons. Such is the nature of groups.

    1. Paul Crawford Silver badge

      Re: An unfortunate turn of events

      The problem is not just this one instance of code being submitted without normal procedures/etiquette being followed; it is that many users' data integrity would in future depend on a major maintainer who has trouble cooperating with the majority.

      Let us not mention ReiserFS in this discussion...

      1. ecarlseen

        Re: An unfortunate turn of events

        ReiserFS?

        "After all, a murder is only an extroverted suicide."

      2. Jamie Jones Silver badge

        Re: An unfortunate turn of events

        But isn't this the same as the "run over by a bus" worry?

        Whether a flaky maintainer goes rogue, or gets run over by a bus, the end result is the same - it's not like he would be given irrevocable keys to the kingdom.

        Therefore, shouldn't a project's viability be judged on the overall community development, and willingness to support the project, not one person?

    2. Anonymous Coward
      Anonymous Coward

      Re: An unfortunate turn of events

      He's not merely abrasive. He broke things in stable repeatedly and claimed that should be fine, rather than doing what a sane kernel developer would.

      1. Spanky_McPherson

        Re: An unfortunate turn of events

        As far as I'm aware this is untrue. Can you back this up? (Not the abrasive part, that goes without saying).

        I have followed this saga from the start and I don't recall Overstreet breaking anything in "stable" (whatever that means). Sure, bcachefs itself has been broken but nobody claims that to be stable yet.

        If this had actually happened I can only imagine the kinds of things that would have been posted on the mailing lists.

        1. habilain

          Re: An unfortunate turn of events

          It certainly has happened - multiple times, if memory serves, but here's Roeck having to clean up Overstreet's code because Overstreet didn't bother to even build for big-endian architectures, let alone test: https://lkml.org/lkml/2024/9/29/520. I remember there was another time where Overstreet demanded another tree be patched with his code outside of that tree's rules, but my Google-fu isn't enough to find it.

          And the comments about this being a personality clash are missing the point. Overstreet has repeatedly failed to follow the kernel development rules. He submits code without adequate testing (https://www.phoronix.com/news/Bcachefs-Fixes-Two-Choices), and repeatedly submits features outside of the merge window (https://www.phoronix.com/news/Linux-616-Bcachefs-Late-Feature). Overstreet then demanded that Linus cannot review/question his patches (https://www.phoronix.com/news/Bcachefs-One-Week-Later-Merge), and that is the core of the current conflict - Overstreet demanding special treatment, in a way that could literally be interpreted as the keys to the kingdom. The personality stuff comes up because Overstreet tries to make it about personality and paint himself as unfairly victimized, but he is simply not following the rules everyone else follows, despite being repeatedly told to. That's the source of friction.

          All of the harsh words being said on the Kernel mailing list now? People have finally had enough.

          1. Spanky_McPherson

            Re: An unfortunate turn of events

            Well, the one example you think you found (https://lkml.org/lkml/2024/9/29/520) - that was a bug in bcachefs itself, certainly not anything broken in "stable".

            1. habilain

              Re: An unfortunate turn of events

              I sent the link with the most info on that particular bug, but Overstreet submitted that to the kernel proper and caused build failures in one of the RC1 builds - can't remember the kernel version though. (https://lore.kernel.org/all/202409272048.MZvBm569-lkp@intel.com/, https://lore.kernel.org/all/202409271712.EZRpO2Z1-lkp@intel.com/). You can argue that this isn't "stable", but I'd argue that's not how Linux development works, given how much testing they do to avoid putting out things which are unstable. And if nothing else, checking that the code builds is *basic* stuff.

              Incidentally, the other thing I mentioned about Overstreet subverting other people's trees was written up here: https://www.phoronix.com/news/Linux-6.5-Bcachefs-Unlikely.

              The problem still remains: Overstreet demands special treatment and to be exempt from the normal rules of Linux Kernel development. And the other Kernel devs are saying "no, play by the rules or don't play at all". Considering that Overstreet agreed to those very same rules when he started contributing, it's really on him.

      2. RainingCatFivesAndDogs

        Re: An unfortunate turn of events

        Yeah, this isn't the best reporting. The article frames a technical result as the byproduct of tangentially related interactions. The email thread where Linus told Overstreet to pound sand was a different exchange. It touched on Kent's abrasive interpersonal style, but it also pointed out his attempt at merging a bunch of large changes on his own schedule while more or less ignoring (again) the process and time demands from other maintainers and Torvalds. Some of those changes touched code outside BCacheFS.

  5. My other car WAS an IAV Stryker
    Pint

    There are certain things you don't say about or to others

    Making any kind of claims of mental illness -- especially in anger or defensively -- is one of those things.

    You don't really know them through just an exchange of emails. You don't know what's going on in their head, their personal life, et cetera. Claiming they are, essentially, sick -- even abnormal -- crosses a line.

    I am not a doctor but we all have some experience with ourselves or others. I know I have some low-level anxiety/anger issues and have been through bouts of depression. It takes a gentle hand from a close trusted source to help someone into and along a path to mental wellness. Angry accusations aren't going to help that and might trigger even worse mental state and behavior.

    Happy weekend, everyone. This ---> for those who choose to (and aren't dealing with addictions and the like; apologies to those who are).

    1. Cloudseer

      Re: There are certain things you don't say about or to others

      Well said.

  6. Anonymous Coward
    Anonymous Coward

    Linux: Life without a marketing department. (NT)


    1. m4r35n357 Silver badge

      Re: Linux: Life without a marketing department. (NT)

      Yeah why can't we have a bunch of paid lying twats making overblown claims like everyone else?

  7. Ikkabar

    Not due to a clash of personalities...

    The issue is that Kent Overstreet is seemingly unwilling to follow the standard rules for the kernel development. New features can be added during the merge window, but not during the release candidate stages that follow.

    If there was a need to provide a fix between the merge windows for a user having serious issues, that could easily have been provided to the user as a patchset, and then included in the next merge window.

    1. OhForF' Silver badge

      Re: Not due to a clash of personalities...

      Linus seems to have announced the merge window for 6.17 late in July, and Kent Overstreet's pull request as linked in the article is dated Mon, 28 Jul 2025 11:14:33 -0400.

      This article is not about Palmer Dabbelt and his late RISC-V patches (the article has a link to the el reg coverage of that as well) but talks about a separate issue.

      1. Liam3851

        Re: Not due to a clash of personalities...

        Kent was already banned during the 6.16 cycle for repeatedly submitting large patches well into the RC phase, and then for abusing Linus while trying to get them into 6.16. Focusing on the 6.17 window misses the point: he had already been booted in 6.16 for misbehaviour over a long period, and then acted in 6.17 as if nothing had happened.

    2. zimzam Silver badge

      Re: Not due to a clash of personalities...

      Exactly. He wants to ship important features faster than the kernel development process allows, so it's a good thing for it to step out of the tree for a while, until its development stabilises and he no longer has to rush features out *after* the last minute.

    3. MMM4

      Re: Not due to a clash of personalities...

      Yes, the article is very misleading. There are personality clashes and for sure they have not helped. But the reason why bcachefs is at risk is just that Kent cannot stop submitting large code changes during the stabilization phase, he keeps violating this rule.

  8. VoiceOfTruth Silver badge

    Due to personality clashes

    >> It's a significant technological loss

    A file system proven to be unreliable is available to slurp all your data. It won't go wrong, until it does. And then you should have had backups.

  9. Spanky_McPherson

    Justice for bcachefs!

    This is such a shame. Bcachefs is the most important filesystem development in Linux in the last two decades.

    Some of the abuse directed to Overstreet on public mailing lists has been shocking. The kernel developers responsible should be ashamed.

    (FWIW, I also lost a btrfs filesystem which imploded, not after a power loss, but after running out of disk space.)

    1. ecarlseen

      Re: Justice for bcachefs!

      The irony is that if the BTRFS developers were as good at writing code as they are at holding grudges, none of this discussion would be happening.

      I'm not a LKML geek, but reading this thread suggests to me that there's a lot of ivory-tower mentality in FS-land ("it works well in theory and in my lab, if it doesn't work for the rest of the world then it's the rest of the world that's wrong") and that does not bode well for the future of the operating system.

      1. Doctor Syntax Silver badge

        Re: Justice for bcachefs!

        The future of Linux does not depend on a non-default FS.

    2. Philo T Farnsworth Silver badge

      Re: Justice for bcachefs!

      Not being a filesystem expert and being just a person who wishes to compute, a lot (okay, almost all) of this discussion is lost on me.

      Anyone want to educate me on what bcachefs brings to the party that, say, ext4 doesn't?

      I've been running ext4 filesystems on my machines for ages and never had the slightest burp (. . . checks backups in fear of incurring the wrath of the deities of overconfidence. . .) or failure on a system that's been running 24/7 for at least four years.

      1. Anonymous Coward
        Anonymous Coward

        Re: Justice for bcachefs!

        I'm there with you. I understood from the article that they're both filesystems, and bcachefs is substantially less stable than btrfs (which, for a filesystem, is an excellent reason to nope right out of town), but why use one of these instead of another filesystem? Saying no to NTFS I understand (being proprietary is one good reason), but why not ext4, XFS, ZFS...?

        (I really am honestly asking, knowing very little about the different filesystems.)

        1. Doctor Syntax Silver badge

          Re: Justice for bcachefs!

          Ext4 is far from excluded, it's most people's default. ZFS has been explained above - the reasons aren't technical, they're legal. XFS? dunno.

          1. Jamie Jones Silver badge

            Re: Justice for bcachefs!

            Correct me if I'm wrong, but from what I understand, ZFS on Linux doesn't run as well as it could due to not being integrated and thus not being able to use the most efficient memory and disk caching it's otherwise capable of using.

            1. Jamie Jones Silver badge
              FAIL

              Re: Justice for bcachefs!

              Dear incel downvoters, it makes me laugh that you are so butthurt as to downvote a simple question, even if I was wrong, but that aside, Liam has basically confirmed what I thought:

              From: https://forums.theregister.com/forum/all/2025/08/15/sad_end_of_bcachefs/#c_5126840

              But it's not GPL so it can't be built into the Linux kernel.

              You can load it as a module and that's fine but its cache remains separate from the Linux cache, so it uses twice the memory, maybe more.

              Still, carry on the downvotes, it's great to know I'm getting under the skin of MAGA morons!

      2. brainwrong

        Re: Justice for bcachefs!

        The BcacheFS feature I'm looking for is raid5/6 with built-in checksumming / correction on error, with the ability to add/remove disks from the filesystem.

        MDADM on top of dm-integrity (with any FS on top) can do this, but it's a bit complex, and using journaling for resiliency against power loss leads to large write amplification on SSDs, or slow performance on HDDs.

        BTRFS can do this, but if a disk fails then it's not guaranteed to work, and it's not resilient against power loss while writing. BTRFS was designed before they understood raid5/6; it's supposed to be a copy-on-write filesystem, but their implementation of raid5/6 breaks this.

        ZFS is inflexible; it's designed for server use, and I'm a home user with an above-average amount of data. I can't just create a bigger filesystem on new disks and copy all the data over when I need to expand.

        BcacheFS can do copy-on-write and raid5/6 at the same time. However, I don't think much of the erasure coding functionality has been implemented yet. But its architecture looks much more sensible than btrfs's.

        1. VoiceOfTruth Silver badge

          Re: Justice for bcachefs!

          >> ZFS is inflexible, it's designed for server use, I'm a home user with an above average amount of data. I can't just create a bigger filesystem on new disks and copy all the data over when I need to expand.

          Er, you can do exactly that with ZFS. Perhaps you don't know what you were trying to do.

          1. brainwrong

            Re: Justice for bcachefs!

            Not properly, it doesn't re-stripe the existing data like mdadm or btrfs, it just evens out the disk usage.

            A 3 disk raid5 expanded to 5 will inherit the same 50% parity overhead for existing data, new data written will have 25% overhead. If it was full before expanding, then you only gain the capacity of 1.6 disks.

            It cannot shrink. Less likely to need this, but I might.

            1. eldakka

              Re: Justice for bcachefs!

              Not properly, it doesn't re-stripe the existing data like mdadm or btrfs, it just evens out the disk usage.

              A 3 disk raid5 expanded to 5 will inherit the same 50% parity overhead for existing data,

              And that can be solved by a simple mv and copy back the file. e.g.

              mv $i $i.tmp && cp -p $i.tmp $i && rm $i.tmp

              Stick that (or your own preference, using rsync for example) in a simple script/find command to recurse it (with appropriate checks/tests etc.), and that'll make the 'old' data stripe 'properly' across the full RAID width.
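As a sketch of that script (with the caveats raised later in this thread about extended attributes, and with quoting added so filenames with spaces survive; the function name is made up):

```shell
#!/bin/sh
# Rewrite every regular file under a directory so the filesystem lays the
# new copy out across the full stripe width. cp -p preserves mode,
# ownership and timestamps, but NOT extended attributes; snapshot first.
restripe_tree() {
    find "$1" -type f ! -name '*.tmp' | while IFS= read -r f; do
        mv "$f" "$f.tmp" && cp -p "$f.tmp" "$f" && rm "$f.tmp"
    done
}
```

Run it as root so ownership survives the copy, and expect it to take as long as reading and rewriting every byte in the tree.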

              1. that one in the corner Silver badge

                Re: Justice for bcachefs!

                As a person with ZFS, I'd like to say:

                Reading

                >> Not properly, it doesn't re-stripe the existing data

                and replying with

                > And that can be solved by a simple ... e.g ... mv $i $i.tmp && cp -p $i.tmp $i && rm $i.tmp ... Stick that (or your own preference, using rsync for example) in a simple script/find command to recurse it (with appropriate checks/tests etc.)

                is as much a "simple solution" and so divorced from the behaviour we'd get if ZFS did the re-striping itself* that you may as well say we don't need ZFS to do snapshots for us, we could write our own simple script to, ooh, create a new overlay/passthrough file system, change all the mount points, halt all processes with writable file handles open... (yes, yes, I'm being hyperbolic).

                * e.g. 'beneath' the user file access level with no possibility of access control issues, not risking problems when changing your simplistic commands into production-ready "appropriate check/tests etc" like status reports, running automatically, maybe even backing off when there is a momentary load increase so the whole server isn't bogged down as the recursive cp chews the terabytes, not risking losing track when your telnet into the server shell dies (not risking a brainfart and doing all that copying over the LAN and back again!) - and simply being accessible to Joe Bloggs ZFS user who just would like it all to work, please.

                1. eldakka

                  Re: Justice for bcachefs!

                  > is as much a "simple solution" and so divorced from the behaviour we'd get if ZFS did the re-striping itself* that you may as well say we don't need ZFS to do snapshots for us, we could write our own simple script to, ooh, create a new overlay/passthrough file system, change all the mount points, halt all processes with writable file handles open... (yes, yes, I'm being hyperbolic).

                  I never said it shouldn't be something ZFS does transparently. I never said it would be a bad idea or unnecessary thing for ZFS to support.

                  I was merely pointing out that it is a fairly simple thing to work around such that maybe the unpaid ZFS devs feel they have more important things to work on for now. I mean, it's taken the best part of 20 years to even get the ability to expand a RAIDZ vdev at all.

                  I'll also say that if anyone actually cares about the filesystem they are using, making conscious decisions to choose a filesystem like ZFS or whatever, then they are not a typical average user. Typical average users don't create ZFS arrays of multiple disks in various raidz/mirror volumes and then grow them. That is not the use-case of an average user.

                  Later (below) you say "production-ready" - why are you messing around with growing raidz vdevs and wanting to re-stripe them to distribute across the array? That is a hobbyist/homelab-type situation. If you are using ZFS in a production environment - that is, revenue/income is tied to it - then the answer is to create a new raidz and migrate (zfs-send/receive) data to it. No messing about with growing raidz vdevs and re-striping the data; that's just totally unnecessary.

                  > e.g. 'beneath' the user file access level with no possibility of access control issues,

                  If you run the mv and cp as root, then there will be no access control issues, cp -p (as root) will preserve file permissions and FACLs.

                  > not risking problems when changing your simplistic commands into production-ready "appropriate check/tests etc" like status reports, running automatically, maybe even backing off when there is a momentary load increase so the whole server isn't bogged down as the recursive cp

                  If your system gets bogged down by doing a single file copy, then I think you have a system problem.

                  > chews the terabytes,

                  Why would it chew terabytes? Unless you have TB-sized files, it won't. Recursive doesn't mean what I think you think it means. It does not mean "in parallel". The example I gave will work on a single file at a time in a serial process, and will not move on to the next file until the current file is complete (technically it won't move on at all by itself; it's the inner part of a loop you'd need to feed a file list to). Therefore no extra space beyond the size of the file currently being worked on is needed.

                  > not risking losing track when your telnet into the server shell dies

                  Why would that do anything? At worst you'll have a single $i.tmp file that you might have to manually do the cp back to the original ($i) name. There will be no data loss (and especially not if you snapshot it first). And even if you 'lose track', just start again, no biggie, will just take longer as you're redoing some of the already done work.

                  And as I said, you can use things like rsync instead, which would give you the ability to 'keep track'. The command I pasted was just the simplest one to give an idea of what is needed; just making a new copy of the file will re-stripe it across the full raidz. Or, if you have your pool split up into many smaller filesystems rather than a single one for the entire pool, you can zfs-send/receive a filesystem to a new filesystem in the same pool, then use "zfs set mountpoint=<oldmountpoint>" to give the new filesystem the same mountpoint as the old one, and delete the old one.

                  > (not risking a brainfart and doing all that copying over the LAN and back again!) - and simply being accessible to Joe Bloggs ZFS user who just would like it all to work, please.

                  I agree, it would be. But it doesn't. I'm pointing out that there is a solution to the issue the poster I am replying to mentioned. It is annoying to have to do (I've done it when I changed the recordsize of my filesystems), but it can be done, and it's not particularly difficult.

                  If someone is going to choose something like ZFS, I'd expect them to be able to do internet searches on topics like this and get help from technical forums or various guides that people have written to cover this sort of use-case. There are guides and instructions on how to do this sort of thing.

                  1. Jamie Jones Silver badge

                    Re: Justice for bcachefs!

                    > If you run the mv and cp as root, then there will be no access control issues, cp -p (as root) will preserve file permissions and FACLs.

                    But not extended attributes! You can use sysutils/clone for that - but that currently doesn't deal with sparse files!

                  2. Orv

                    Re: Justice for bcachefs!

                    Not to mention that if you're using telnet to get a remote shell you're really behind the times.

                    But any long-running command should either be run in tmux, or backgrounded with nohup.

                    And people not experienced enough to know that stuff are probably not experienced enough to be growing/shrinking filesystems anyway. It's super easy to shoot yourself in the foot doing that, even with something simple like ext4.

            2. kmorwath

              Not properly, it doesn't re-stripe the existing data

              Now it does:

              https://freebsdfoundation.org/blog/openzfs-raid-z-expansion-a-new-era-in-storage-flexibility/

              1. brainwrong

                Re: Not properly, it doesn't re-stripe the existing data

                I can't find anywhere on the page that says it re-stripes the data. It's deliberately cagey on this issue. The only relevant sentence I can find is "The process works by redistributing existing data across the new disk configuration, creating a contiguous block of free space at the end of the logical RAID-Z group."

                An example is given where a 4-disk raid5 (file-backed disks, 10G size) is expanded to 5 disks. 4.77G of data is written to the 4-disk filesystem, consuming 6.38G, leaving 33.1G of 39.5G free. After adding the fifth disk, usage is still 6.38G with 43.1G free. 4.77 * 4/3 = 6.36G, consistent with the 6.38G usage reported for 4-disk raid5. 4.77 * 5/4 = 5.96G, which is not consistent with 5-disk raid5 usage as reported by the ZFS tools. It has *not* re-striped any data in the given example.

          2. Benegesserict Cumbersomberbatch Silver badge

            Re: Justice for bcachefs!

            ZFS is extremely flexible: checksumming, compression, encryption, mirroring, duplication, deduplication, snapshots, extended attributes, case-sensitivity, cache optimisation, error detection, error correction, RAID... I'm struggling to find a single feature of modern filesystem function that you can't find in ZFS, and anything you don't want you can omit from your chosen ZFS setup by simply choosing not to use it.

            Wrapping one's head around the implementation is a learning curve, but that's the cost of using its abilities. Unlike btrfs, in ZFS it's logical (or, at least, there is a logic to it). Oh, and it's stable and it works, too.

            1. Liam Proven (Written by Reg staff) Silver badge

              Re: Justice for bcachefs!

              > I'm struggling to find a single feature of modern filesystem function that you can't find in ZFS

              * Add another parity volume to an existing array.

              * Change RAID levels on the fly, or even offline.

              * Shrink a volume in place.

              It's not perfect.

      3. Spanky_McPherson

        Re: Justice for bcachefs!

        Well, checksums are one useful feature. How do you know that your ext4 filesystem isn't corrupted (due to faulty disks, cables, cosmic rays, whatever?). Short answer - you have no idea whether the data you read back is the data you wrote.

        ZFS, btrfs and bcachefs will guarantee to give you back the same data - or tell you that you need to restore from backup. If you put 2 disks in your machine, these filesystems can not only detect corruption but fix the bad copy. Of course, ZFS has the wrong license, and btrfs will eat your data (Facebook engineers apparently disagree with this).

        What if you have a fast SSD and a slow HDD? Bcachefs will direct all your writes to the fast disk, and copy to the slow disk in the background. What if you need more space? Put a new disk in and run a command to expand the filesystem.

        You can take a snapshot of the filesystem at any point, keeping a history of changes. For example, you might take a snapshot before a system update, that can be rolled back instantly if there are any problems.

        Ext4 still wins in performance and stability - for now. But giving up a bit of performance for checksums, snapshots, etc is worth it for me at least.
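The checksum idea can be illustrated in a few lines of shell (a toy model, assuming GNU coreutils' sha256sum, and nothing like how these filesystems actually store per-block checksums in metadata; the function names are made up): keep a checksum alongside the data at write time, and refuse to return the data if it no longer matches at read time.

```shell
#!/bin/sh
# Toy model of a checksumming filesystem's read path: each file is paired
# with a .sum sidecar, and reads fail loudly if the data was altered.
write_checked() {  # $1 = path; data on stdin
    cat > "$1" && sha256sum "$1" > "$1.sum"
}
read_checked() {   # $1 = path; prints data, or fails if it was altered
    sha256sum --check --status "$1.sum" || { echo "corrupt: $1" >&2; return 1; }
    cat "$1"
}
```

A plain ext4 read, by contrast, is the `cat` line alone: whatever bytes come back are what you get.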

        1. VoiceOfTruth Silver badge

          Re: Justice for bcachefs!

          Add to that compression, encryption, instantaneous snapshots, extremely easy replication with ZFS, ...

          ZFS is miles ahead of ext4 for many things.

      4. Liam Proven (Written by Reg staff) Silver badge

        Re: Justice for bcachefs!

        > Anyone want to educate me on what bcachefs brings to the party that, say, ext4 doesn't?

        I have gone into this at some length before. For instance, here:

        https://www.theregister.com/2022/03/18/bcachefs/

        ... which is linked from the article you are commenting upon.

        ext2/3/4 only handle one partition on one disk at a time.

        As well as this, first, for partitioning, you need another tool, such as fdisk or parted, to write an MBR or GPT partition table. But you can do without, in some situations.

        For RAID, you need another tool, e.g. kernel mdraid.

        (Example of the intersection of the partitioning and RAID layers: it is normal to make a new device with mdraid and then format that new device directly with ext4, _not_ partitioning it first.)

        Want resizable volumes, which might span multiple disks? You need _another_ tool, LVM2.

        But don't try to manage mdraid volumes with LVM2, or LVM2 with mdraid. Doesn't work.

        Want encryption? You need another tool, such as LUKS. There are several.

        Watch out if you use hardware RAID or hardware encryption. The existing tools won't see it or handle it.

        It is complicated. There is lots of room for error.
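To make the "lots of tools" point concrete, here is a sketch of that traditional stack, one layer per tool. The device names and sizes are made up, and DRYRUN=echo prints the commands rather than touching any disks; do not run this for real against disks you care about.

```shell
#!/bin/sh
# Sketch only: the classic layered stack. /dev/sdb, /dev/sdc and the
# volume size are illustrative. DRYRUN=echo prints instead of executing.
DRYRUN=echo

# 1. RAID: mirror two whole disks with mdraid (no partitioning needed).
$DRYRUN mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# 2. Encryption: LUKS on top of the array.
$DRYRUN cryptsetup luksFormat /dev/md0
$DRYRUN cryptsetup open /dev/md0 secure0

# 3. Resizable volumes: LVM2 on top of LUKS.
$DRYRUN pvcreate /dev/mapper/secure0
$DRYRUN vgcreate vg0 /dev/mapper/secure0
$DRYRUN lvcreate -n data -L 100G vg0

# 4. Finally, the filesystem itself.
$DRYRUN mkfs.ext4 /dev/vg0/data
```

Four separate tools, each with its own syntax, ordering constraints and failure modes, before a single file is written.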

        So, ZFS fixed that. It does the partitioning part, and the RAID part, and the encryption part, and the resizing part, and also the mounting part, all in one.

        It's great, it's easier and it's faster and you can nominate a fast disk to act as a cache for a bigger array of slower disks...

        And it can take snapshots. While it is running. Take an image of your whole OS in a millisecond and then keep running and all the changes go somewhere new. So you can do an entire distribution upgrade, realise one critical tool doesn't work on the new version, and undo the entire thing, and go back to where you were...

        While keeping all your data and all your files intact.

        All while the OS is running.
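In ZFS terms, that snapshot-and-undo workflow looks roughly like this; the dataset name rpool/ROOT is an assumption (use whatever holds your OS), and DRYRUN=echo keeps the sketch from touching a real pool:

```shell
#!/bin/sh
# Sketch of snapshot-before-upgrade with ZFS. rpool/ROOT is a placeholder;
# DRYRUN=echo just prints the commands instead of running them.
DRYRUN=echo

# Instant, cheap snapshot before the risky operation...
$DRYRUN zfs snapshot rpool/ROOT@pre-upgrade

# ...do the distribution upgrade here...

# ...and if it goes wrong, put everything back as it was
# (-r also discards any snapshots taken since):
$DRYRUN zfs rollback -r rpool/ROOT@pre-upgrade
```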

        And it does it all in one tool.

        But it's not GPL so it can't be built into the Linux kernel.

        You can load it as a module and that's fine but its cache remains separate from the Linux cache, so it uses twice the memory, maybe more.

        So, there are other GPL tools that replicate some of this.

        Btrfs does some of it. But Btrfs overlaps with, and does not interoperate with, LVM and with mdraid and with LUKS... and it collapses if the disk fills up... and it's easy to fill up because its "how much free space do I have?" command is broken and lies... and when it corrupts, you can't fix it.

        It is, in short, crap, but you can't say that because it is _rude_, and so, such being the way of Linux, it has passionate defenders who complain they are being attacked if you mention problems.

        Bcachefs is an attempt to fix this with an all-GPL tool, designed for Linux, which does all the nice stuff ZFS does but integrates better with the Linux kernel. It does not just replace ext4, it will let you replace ext4 _and_ LVM2 _and_ LUKS _and_ mdraid, all in one tool.

        It will do everything Btrfs does but _not_ collapse in a heap if the volume fills up. And if it does have problems, you can fix it.

        All this is good. All this is needed. We know it's doable because it already exists in a tool from Solaris in a form that FreeBSD can use but Linux can't.

        But in a mean-spirited and unfair summary, Kent Overstreet is young and smart and cocky and wants to deliver something better for Linux and Linux users, and the old guard hate that and they hate him. They hate that this smart punk kid has shown up the problems with their tools they've been working on for 20-30 years.

        1. VoiceOfTruth Silver badge

          Re: Justice for bcachefs!

          Due to your logical and factual explanation of why BTRFS is lacking, prepare to receive thumbs down.

          >> Kent Overstreet is young and smart and cocky and wants to deliver something better for Linux and Linux users, and the old guard hate that and they hate him. They hate that this smart punk kid has shown up the problems with their tools they've been working on for 20-30 years.

          I think his manners could improve (that goes for some of the people who criticise him too). But, yeah, he's upsetting some people by fixing problems the old guard have not fixed or even acknowledged. Every time I read that BTRFS is great and ready for prime time, I rub my chin, knowing this is true only when it works properly. If not, you are screwed.

        2. kmorwath

          Re: Justice for bcachefs!

          You forgot COW, Copy On Write (which underpins snapshots), which is the one big difference from file systems like ext4 or NTFS. And the hash checks for detecting data corruption.

          1. Liam Proven (Written by Reg staff) Silver badge

            Re: Justice for bcachefs!

            > You forgot COW

            No I did not. I specifically said:

            "And it can take snapshots. While it is running."

      5. MMM4

        New vs old generations

        This is the nicest and shortest description of how "new generation" filesystems differ from older ones. It's focused on ZFS but not really specific to it. Bookmark that page because I found it un-googlable for some unknown reason.

        https://illumos.org/books/zfs-admin/zfsover-1.html#zfsover-2

        > ZFS eliminates the volume management altogether. Instead of forcing you to create virtualized volumes, ZFS aggregates devices into a storage pool....

        > ...

        > ZFS is a transactional file system, which means that the file system state is always consistent on disk. Traditional file systems overwrite data in place, which means that if the machine loses power, for example, between the time a data block is allocated and when it is linked into a directory, the file system will be left in an inconsistent state....

        > ...

        > With a transactional file system, data is managed using copy on write semantics. Data is never overwritten, and any sequence of operations is either entirely committed or entirely ignored. This mechanism means that the file system can never be corrupted through accidental loss of power or a system crash. So, no need for a fsck equivalent exists.

      6. RAMChYLD Silver badge

        Re: Justice for bcachefs!

        The big one is tiered drive caching. You can buy a 64TB hard drive and a 2TB SSD, and the system will intelligently swap data between the HDD and SSD to give an appearance of speed. Highly useful if you edit videos and then archive your footage to the same drive, or if you have a huge game library: the games that you play frequently will always load at SSD speeds, while the games you buy in Steam sales will stay archived until you're ready to play them, at which point, after an initial slow start, they will speed up as data is cached onto the SSD. All this without needing user intervention.

        Also, bcachefs makes it trivial to do RAID, like ZFS does. Before, you needed to fiddle around with LVM or {dm,md}raid and then build your ext4 on the resulting volume, which is tiresome. Bcachefs, like ZFS, ties it into the same layer it runs in, so there's little to no overhead.

        And you can combine the RAID feature and tiered storage feature together to get a blistering fast storage solution.
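        As a sketch of how simple the tiered setup is to express (the labels and device names here are made up; check `man bcachefs` before copying anything):

```shell
# One NVMe SSD as the fast tier, one big HDD as the slow tier.
bcachefs format \
    --label=ssd.ssd1 /dev/nvme0n1 \
    --label=hdd.hdd1 /dev/sda \
    --foreground_target=ssd \
    --promote_target=ssd \
    --background_target=hdd

# Mount the multi-device filesystem with a colon-separated device list.
mount -t bcachefs /dev/nvme0n1:/dev/sda /mnt

# Writes land on the SSD, migrate to the HDD in the background,
# and hot data gets promoted back to the SSD when read.
```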

  10. powershift

    Oh I don't want to feel bad

    I didn't see anything offensive from Overstreet in the article. If the btrfs dev asked for specifics, I'd think he would get them. Arnold says it best 0:09 - 1:37 https://www.youtube.com/watch?v=pwqxdCGCDE8

    1. Anonymous Coward
      Anonymous Coward

      Re: Oh I don't want to feel bad

      That's because the article glosses over Overstreet's persistent refusal to follow basic patch submission procedures, along with his high-handed approach to any criticism of his behaviour or submissions.

      The article tries to frame it as a clash of personalities, as if it's an entirely subjective emotional issue on the part of the kernel developers, but the reality is that Overstreet is (perhaps deliberately) refusing to conform to the technical requirements for participation.

  11. This post has been deleted by its author

  12. Anonymous Coward
    Anonymous Coward

    humans, can't live with them, can't live without them

    It's amazing how the same interpersonal issues crop up in all groups (e.g. English Lit depts, HOA boards, Linux kernel groups, churches) when members promote competing ideas.

    A Spiegel article a few years ago interviewed a researcher who had studied chimps for over 20 years under Jane Goodall. He said if 300 chimps were seated in an airplane the way humans routinely travel, by the time the plane landed there'd be dismembered and dead chimps all over. He's amazed that, for the most part, we, one of the great apes along with chimps, gorillas, orangutans and bonobos, get along well enough not to do that (very often), even though our closest relative behaviorally is the chimp (both chimps and bonobos by DNA).

    I can't imagine what forking chaos (kOS) will ensue when Linus Torvalds retires or dies.

    From Perplexity.ai:

    "FreeBSD’s development and maintenance are *not* tied to a single personality as much as Linux’s are. Instead of relying on one central figure, FreeBSD is managed by an **elected Core Team**—currently made up of nine members—who serve as the project’s "board of directors." This team is responsible for the overall direction, management, and key decisions affecting the project. Elections for the Core Team are held every two years, and members are chosen from the pool of active contributors (committers).[2][3][5][9]

    - **Key differences from Linux:**

    - *FreeBSD* employs a more formal and collective leadership model, with project goals and decision-making distributed among the Core Team.

    - *Linux* kernel development, by contrast, is notably led and controlled by Linus Torvalds, and major decisions regarding direction, features, and merges are ultimately up to him.

    This structure means that FreeBSD is less centralized around one personality, making it less vulnerable to the influence or absence of any single individual. The project is further subdivided into teams responsible for specific areas (security, documentation, ports, release engineering, etc.), each with its own maintainers who can be replaced if inactive for extended periods.[1][3][5]

    In summary, **FreeBSD’s leadership is distributed and organizational, whereas Linux’s is more personalized and centralized under Torvalds**. This distinction makes FreeBSD’s project governance more resilient to individual changes.

    [1] https://download.freebsd.org/doc/en/books/dev-model/dev-model_en.pdf

    [2] https://www.zenarmor.com/docs/linux-tutorials/freebsd-vs-linux

    [3] https://www.freebsd.org/administration/

    [4] https://forums.freebsd.org/threads/what-are-the-benifits-of-freebsd-over-linux.67994/

    [5] https://en.wikipedia.org/wiki/FreeBSD_Core_Team

    [6] https://www.reddit.com/r/freebsd/comments/mt9w09/why_is_linux_still_more_popular_than_freebsd/

    [7] https://freebsdfoundation.org/about-us/our-team/

    [8] https://klarasystems.com/articles/choosing-between-freebsd-and-linux-a-choice-without-os-wars/

    [9] https://wiki.freebsd.org/HowToBe/CoreMember

    [10] https://news.ycombinator.com/item?id=41732415

    1. containerizer

      Re: humans, can't live with them, can't live without them

      > This distinction makes FreeBSD’s project governance more resilient to individual changes.

      Not trying to troll, but it's probably also why FreeBSD hasn't captured mindshare in the way Linux has. Linux did have a head start, of course, but people didn't feel strongly enough about the governance model to walk away from it.

      It's fair to say that the benevolent-dictator-for-life model falls apart when the benevolent dictator is an idiot. That happened with GCC/EGCS a long time ago, and I'm sure in other cases too.

      1. jake Silver badge

        Re: humans, can't live with them, can't live without them

        "Linux did have a head start, of course"

        1BSD was released in 1978.

        1. Orv

          Re: humans, can't live with them, can't live without them

          True, but if we're talking about desktop computers, the more relevant comparison is 386BSD, which was released in 1992.

    2. druck Silver badge

      Re: humans, can't live with them, can't live without them

      If you have valid points to make, don't devalue the post by quoting an AI.

  13. remainer_01

    Eh, sorry, what?

    >> The whole incident emphasizes the extent to which these ostensibly technical debates are often settled by personality and emotion, rather than by technical excellence

    Have you really worked in IT (or in any other field, really)? This is mostly par for the course.

    1. PRR Silver badge
      Devil

      Re: Eh, sorry, what?

      > ostensibly technical debates are often settled by personality and emotion, rather than by technical excellence

      My thing is radio and engines. Historically, a LOT of technical decisions get settled on non-technical factors: rants and bullying.

    2. Moldskred

      Re: Eh, sorry, what?

      It also seems a bit optimistic to think we can accurately and objectively measure "technical excellence."

      1. JoeCool Silver badge

        Re: Eh, sorry, what?

        Of course you can

        1. enumerate and summarize the internal/technical choices.

        2. list and categorize the tradeoffs, in terms of external/functional impacts

        3. assign a weighting to each category, and a score for technical impact.

        4. calculate the best choice from the weights and scores

        Otherwise, what would be the point of making technical choices?
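        In code, those four steps are just a weighted decision matrix. A toy sketch, where every category, weight and score is invented purely for illustration:

```python
# Hypothetical weighted-scoring sketch of the four steps above.
# Categories, weights and per-option scores are all made up for illustration.
options = {
    "btrfs":    {"stability": 2, "features": 5, "maintainability": 3},
    "bcachefs": {"stability": 3, "features": 5, "maintainability": 4},
    "ext4":     {"stability": 5, "features": 2, "maintainability": 5},
}
weights = {"stability": 0.5, "features": 0.2, "maintainability": 0.3}

def total(scores):
    # Step 4: weighted sum of the category scores.
    return sum(weights[c] * s for c, s in scores.items())

best = max(options, key=lambda name: total(options[name]))
print(best)  # prints "ext4" with these invented numbers
```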

        1. demon driver

          Objectivity

          Objectivity is an illusion. All 'summarization', all attempts to 'categorize', every 'list of [...] impacts', every 'weighting', every 'score' tends to be – at least partly – subjective and often also incomplete. Of course that doesn't mean we shouldn't even try, but we shouldn't overestimate what it actually is that we're doing and achieving.

    3. Liam Proven (Written by Reg staff) Silver badge

      Re: Eh, sorry, what?

      > This is mostly par for the course.

      True.

      I just want to counter the perception of giant brains solving hard problems for all mankind, that type of thing.

      1. C R Mudgeon Silver badge

        Re: Eh, sorry, what?

        "I just want to counter the perception of giant brains solving hard problems for all mankind, that type of thing."

        If only giant egos didn't keep getting in the way. (I take no position on *whose* ego(s) is/are more responsible for the current fracas. I haven't followed it closely enough to have an informed opinion -- though I imagine that as usual, there's blame to be had on both sides.)

        A delightful counterexample comes to mind: in 1989, the UUCP Mapping Project, which provided a number of services for folks who couldn't at that time get direct access to the Internet, decided to discontinue some of those services. That announcement reads, in relevant part (emphasis added):

        The reason for setting up the UUCP Zone originally was to make this service available to organizations on the UUCP network who are not able to join networks such as ARPANET. Since Rick Adams has offered to have UUNET perform this service, there is no justification for us to continue operations. If we were running as a commercial enterprise or a government bureaucracy I suppose we could find reasons to continue, but we began as a service to the community--and the best service we feel we can do now is to avoid redundancy. Therefore, effective immediately, we have discontinued all new UUCP Zone registrations and are referring inquiries to UUNET.

        The one service they did continue to provide -- the UUCP maps themselves -- would be wound down in the second half of 2000, and for the same reason: the Internet had become ubiquitous enough that it was doing the job better, so the Mapping Project had outlived its usefulness.

        @Liam Proven: the UUCP Network and its Mapping Project are an important bit of computing history. Maybe a topic for an article during this 25th anniversary of its passing?

  14. Anonymous Coward
    Anonymous Coward

    Handbags at twenty paces?

    Sort of descends into low comedy.

    COW file systems: ButterFS gone rancid, and the cultured alternative YoghurtFS turned sour.

    Lactose intolerant ?

    Could be worse KumisFS. :)

    Ultimately, all this nonsense is Linus' fault, because by his choice (for understandable reasons) there is no stable kernel interface (kABI ~ DDI/DKI) against which file systems could be routinely developed, presumably as loadable kernel modules, and then remain, in production, outside the kernel source tree.

    (In this case I would keep a half brick in my handbag as an emperor penguin is a formidable adversary. :)

    † aka bcachefs. ‡ I immediately thought of Linus when reading Dagger Beaks and Strong Wings.

  15. Moldskred

    I don't know much about kernel development or filesystems and do not have a bone in the fight, but I have to say, Proven's comment doesn't seem particularly worthwhile. He simply posits that Overstreet is in the right, that it's a large mistake to omit the July changes from 6.18, and that this was done because of a clash of personalities, without any explanation or further support for those claims. If you're going to publish a comment that someone's decision is a mistake, at least argue your case!

    1. Liam Proven (Written by Reg staff) Silver badge

      > Proven's comment doesn't seem particularly worthwhile.

      I thought I did argue it. I attempted to make it as clear as I knew how.

      I have seen multiple articles saying "bcachefs is out" or "will be out", and why it happened, but none that dig into what that means and why it is bad news.

      I have never met or talked to Torvalds, or Ts'o, or Overstreet, or most of these people, but I read lots of verbiage about how KO is abrasive and nasty and mean and doesn't play by the rules.

      This is a misrepresentation and it's unfair.

      [a] He is vocally critical of substandard code and substandard implementations and open about existing real problems.

      That, IMHO, is a good thing. We need more of it.

      Many people, especially from an American cultural background, are very uncomfortable with bare open criticism like that. Well, tough. It is needed.

      [b] The abuse he _gets_ is FAR worse than that which he gives. He is abused and attacked constantly, across the Linux world, and so are his co-workers: the Rust-in-Linux people, the Asahi people, and so on.

      Filho quit. Hector Martin quit.

      Lots of brilliant developers are quitting and walking away because grumpy old men are grumpy.

      Again: tough shit for the old precious delicate flowers who are in charge.

      If some punk kids come along and say "U R DOIN IT WRONG" and prove it by doing it better, then suck it up and accept and learn. Pull their code, up your game, do better or get out.

      I want to see more smart kids doing smart stuff and I want to see more old men who can't keep up quitting instead.

      1. Moldskred

        I think you made a compelling argument that there is a conflict of personality and that Oversteer has been on the receiving end of unfair criticism, but the leap from there being a conflict to that conflict being the reason that Oversteer's patch was not included in 6.18 and that the decision to not include it was a "large [...] mistake in the kernel development management process" doesn't really seem to be bridged.

        1. m4r35n357 Silver badge

          Repeatedly misspelling peoples' names is really clever.

          1. Moldskred

            Apologies, my bad. That wasn't intentional; I must have misread the name.

      2. zimzam Silver badge

        I'd be able to take your argument more seriously if you demonstrated that you had any idea why bcachefs was dropped. It had nothing to do with conflicting personalities, it had to do with him thinking his development process was more important than everyone else's. Submit new features in the feature window and not the RC window and everything would be fine, but he *repeatedly* refused to do that. As I said above, if he wants to iterate quickly on his project that's fine, but do it on your own time. It doesn't need to exist in the kernel right now if it can't work at the kernel's cadence.

      3. GuldenNL

        I'm still laughing, at maniacal volume, at your statement: "Many people, especially from an American cultural background, are very uncomfortable with bare open criticism like that. Well, tough. It is needed."

        I've lived in Windsor, Naarden NL, Weinheim DL, Launceston TAS Oz, Singapore, several places in the USA.

        Your statement really implies that Americans (and the Dutch and Aussies, for that matter) just suck it up and don't speak up when 'bare open criticism' is directed their way. They don't. If you ask anyone from these countries who actually does suck it up and instead sulks off, you might be surprised.

        As someone who adopted Linux back in '93 while living in Naarden and saw Linus' hilarious nearly daily tirades, it's my observation that the planet of Linux has held many citizens of the Kingdom of Butthurt over the past three plus decades, and they originated from virtually every country on Earth.

      4. Jamie Jones Silver badge

        I know nothing about Overstreet or the issues, but I think that generally, it's more often the case that the new punk kid on the block cluelessly throws away everything learnt, and comes out enthusiastically with some cool solution that doesn't work.

        So in the case where the new kid is RIGHT, people unfairly automatically believe the old folk.

        It was very similar when a young Linus proposed his Kernel - he was ridiculed by "those in the know" for supposedly not understanding the issues that would be involved.

      5. demon driver

        Acceptable ways of interacting

        > I have never met or talked to Torvalds, or T'so, or Overstreet, or most of these people, but I read lots of verbiage about how KO is abrasive and nasty and mean and doesn't play by the rules. This is a misrepresentation and it's unfair.

        I agree with a lot of what you're saying here, and I'd wish as much as you that bcachefs would quickly become and stay a fixed part of Linux, but I have read lots of Overstreet's own verbiage, and the impression remains that too much of it /was/ "abrasive and nasty and mean" enough (and, within the confines of Linux kernel development, for the bits I imagine to have at least slightly understood, including the criticism from others, wrong enough) to justify at least some of the reactions.

        If we're looking at the clashes we've seen there, enough of it would – for good reasons – justify dismissal in a workplace environment without any works council having the means to stop it.

        The 'punk' attitude of not respecting your elders just because they're elders is perfectly ok, but if the world wants open source development in general, and Linux kernel development in particular, to continue to thrive and be attractive to today's and coming generations of developers, it must adopt and, to some extent, guarantee an acceptable way of interacting with each other. Whatever one might say about other participants, too, in the long run I'm pretty sure it will prove more valuable to have delayed even a very important project than to let unacceptable behavior slide.

      6. l8gravely

        I've met Ted and even had dinner with Linus many many many moons ago as part of a group. Neither will remember me. But both were well spoken, able to take criticism, and still be graceful. Ted more so than Linus at times. Kent is another talented programmer, he really is. But he's a total tool when you make a comment he views as against him or his code. Once he's decided on the right way, it's his way come hell or high water.

        Which isn't the way to collaborate with other people well.

        And honestly, I've been watching the various mailing lists and Kent is getting better... but I think he burned one too many bridges, and Linus is either going to make him sit out a cycle and then merge it for 6.18, in which case hopefully Kent will learn the lesson, or he will just 'git rm' the entire sub-tree and purge it all for the next release. I hope it's the former, and I hope that Kent realizes this and gets his ducks in a row, with all his patches and testing done, so that when the NEXT merge window opens up, he can be there with a well-explained patch set that has been sitting in the linux-next tree, and doesn't do stupid crap like dropping major patches in at rc4 or rc5 of the window.

        Kent's reasoning was "I needed to post these patches because people had data corruption and this was the only way to get them working again!", which is BS. Anyone running 100 TB of data on an experimental filesystem deserves to lose data and should expect to lose data. And they can jolly well compile their own kernel with out-of-tree patches to get their data back. At that point, send in a patch to disable the bad feature. Then you can send in a second proposed patch that could either go in now to fix it, or be pushed out to the next release as a real fix.

        It's about patience and an understanding of the process.

    2. VoiceOfTruth Silver badge

      Or you could follow some of the links and read for yourself. That is what hyperlinks are for.

      Proven is right with what he wrote here: "The abuse he _gets_ is FAR worse than that which he gives."

      How do I know? Because I read it on LKML. And that is only what is public.

      And here: "Lots of brilliant developers are quitting and walking away because grumpy old men are grumpy."

      When I wrote earlier about "not invented here", this is what I am referring to. Somebody tried to challenge my comment with noise about NFS and CDROMs and so on. They didn't get my argument, so perhaps I should be more explicit. The NIH are the grumpy old men who don't like somebody coming along and showing them how it could be done better. The fact is there are known problems with BTRFS, and some people do not even acknowledge it. When KO calls it out, they could say "you're right, but we're working on it, or we don't have plans or resources to fix it, or we don't know how to fix it". Instead they get annoyed with somebody pointing out faults (however pointedly he does it). These same people are happy to point out faults elsewhere, and be rude about it.

      The old guard are a bit too comfortable in their slippers. These whippersnappers telling us how it should be done. Hrrumph.

  16. JLV Silver badge
    Unhappy

    Silly me

    (reads the title and daydreams a bit)

    > Linux is about to lose a feature – over a personality clash

    Yessss! Poettering finally annoyed one too many folks and systemd is out.

    (wakes up and realizes Kate Upton is not this evening's date)

    Uh, sorry, probability about on par with Beast666 moving out of his mom's basement in St. Petersburg or finally having something semi-intelligent to contribute.

    Back to reality, yo! Clashing nerd egos driving technical strategy. Good coverage, vultures.

    Yes, yes, I know systemd is not in the kernel.

    1. Anonymous Coward
      Anonymous Coward

      Re: Silly me

      Just wait until Windows 12.

  17. Taliesinawen

    No More Bcachefs in Linux?

    From: Linus Torvalds <torvalds@linux-foundation.org>

    To: Kent Overstreet <kent.overstreet@linux.dev>

    Cc: linux-bcachefs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org

    Subject: Re: [GIT PULL] bcachefs fixes for 6.16-rc4

    Date: Thu, 26 Jun 2025 20:21:23 -0700 [thread overview]

    Message-ID: <CAHk-=wi+k0E4kWR8c-nREPO+EA4D+=rz5j0Hdk3N6cWgfE03-0@mail.gmail.com> (raw)

    In-Reply-To: <andf2izzsmggnhlqlojsnqaedlfbhomrxrtwd2accir365aqtt@6q52cm56jmuf>

    On Thu, 26 Jun 2025 at 19:23, Kent Overstreet <kent.overstreet@linux.dev> wrote:

    > per the maintainer thread discussion and precedent in xfs and btrfs

    > for repair code in RCs, journal rewind is again included

    I have pulled this, but also as per that discussion, I think we'll be parting ways in the 6.17 merge window. You made it very clear that I can't even question any bug-fixes and I should just pull anything and everything.

    Honestly, at that point, I don't really feel comfortable being involved at all, and the only thing we both seemed to really fundamentally agree on in that discussion was "we're done".

    Linus

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: No More Bcachefs in Linux?

      > No More Bcachefs in Linux?

      Wrote about that over 6 weeks ago:

      https://www.theregister.com/2025/07/01/bcachefs_may_get_dropped/

      Do try to keep up, eh?

  18. ncarruth

    Some additional context

    I think this article is missing some context (most of which is in the linked article from July, here: https://www.theregister.com/2025/07/01/bcachefs_may_get_dropped/).

    bcachefs isn’t getting dropped just because Kent criticized btrfs. As the previous article linked above suggests, and from what I can tell from LKML and LWN posts, Kent wants bcachefs to be something like an autonomous fiefdom within the kernel, and Linus said No. This happened more than a month ago.

    Given the level of control Kent apparently wants over the code — after all, bcachefs is his own project! — perhaps developing out-of-tree, either as a kernel module or via FUSE, is more appropriate, though obviously less convenient for users.

    (Also, the late patch wasn’t the one this current article mentioned, but the one discussed in the previous article above.)

    1. JLV Silver badge

      Re: Some additional context

      The Linux Experiment

      https://www.youtube.com/watch?v=jSNgS9siL44 about a month back. 1:30 into the video, this gets covered.

      According to the youtuber, this was a case of Kent having frequent issues conforming to kernel dev procedure; in this case, submitting a code change for a new feature during the release-candidate phase of the development window. When called out about it, he pushes back rather than just following the rules.

  19. Lunardr4gn

    The Dr4gn's thoughts on this whole thing

    I'm not gonna pretend I know anything about btrfs, and what its intended purpose is, but I will say that the statement about it chewing up drive space is pretty true. From personal experience it has too many features and "selling points" for me to trust it very much. Jack of all trades, master of none, and all that.

    My go-to filesystem is XFS nowadays. It's stupidly fast, and has been very stable for me. If it was good enough for SGI, it's good enough for me.

    That's not to say that EXT4 isn't any good though. I recall ext4 being incredibly stable for me when I used Ubuntu maybe 5 or 6 years ago.

    I will say that it's mind boggling to me that some of these devs are so eager to spit venom at eachother over these sorts of things. Maybe I'm just out of tune with the Linux developer side of the community. (Thanks CachyOS community for not being the most toxic place in the world...).

    1. ThoughtCrime

      Re: The Dr4gn's thoughts on this whole thing

      "chewing up drive space"??

      It's a copy-on-write filesystem. That's what it's supposed to do! It's certainly not using nearly as much space for metadata as I would have expected. Bcachefs isn't going to chew up significantly less space unless it's easier to identify files that need noCOW and to exclude them.

  20. r2db

    It's about rules

    The issue with the developer is not actually about a personality conflict. That isn't helping, but more importantly he wants the rules of kernel development to not apply to him. He wants his code to not be audited, and just merged without question.

    Nobody who thinks like that should be allowed near the kernel of the OS that literally runs the modern world.

  21. wpeckham

    Missing the point

    Developers made two points here and most comments ignore both.

    #1 Development in a company is driven by projects and dollars. Development in the Kernel is driven by community! A toxic member of the community cannot be, and should not be, trusted.

    #2 To a developer, features are a nice thing to pursue, but the gold standard involves correctness, elegance, and MAINTAINABILITY! You might like that greater feature set, but if it does not integrate with existing code safely, or does not present in a way that the other developers can maintain, then it is a trap. Using bad or misleading code is to set landmines in your own yard. Don't.

    Choices must be made, and making them in a way that supports and strengthens the community, the philosophy, the standards, and the product is always the RIGHT choice. Even if you do not like it.

    And does it really matter if a feature takes an extra cycle to implement, to make sure everyone is happy with it and the way it is implemented? It never really has before, so why now? I am willing to wait for it to be done RIGHT, instead of just fast!

  22. Fido

    The cow that won't eat your data

    I found the slogan "The COW filesystem for Linux that won't eat your data" funny and brilliantly juvenile at the same time. Yes, I know not everyone has the same sense of humour.

    For me "The COW filesystem for Linux that won't eat your data" is a pun based on how much a real cow eats every day in contrast to how difficult it is to create a bug-free copy-on-write filesystem with live snapshots, compression, redundancy and data integrity features.

    Humour is so difficult many people don't even try; yet a smile and laughter are so essential to human health that, in spite of a cancel culture aiming to eliminate people for any mistake, there are people who still try to make jokes. Maybe that is part of human nature.

    Anecdotally I've had more trouble with thinly-provisioned LVM volumes than BTRFS. Of course everything was backed up and the failures can be attributed more to my misunderstanding than bugs in the code.

    I can understand the need for a team that gets along and works well together. At the same time I hope the development of Linux remains based on technical merit.

  23. Bitsminer

    Need for Governance

    When I complained about the lack of technical strategy and governance over the Rust integration issues I was seriously downvoted.

    Yet here we are again with more fractures appearing in the structure.

    The "toxic behaviors" can clearly be remedied. Linus is example number 1.

    The internal interfaces for device drivers and file systems can be clearly defined, and changes approved in a phased way that takes different stakeholders into account.

    Other unmanageable people can be isolated to minimize their blast radius.

    Actual evidence for kernel correctness and/or bugs, in the form of full-on testing by independent testers, could be adopted.

    Feature management could be added to help support legacy hardware while maintaining compatibility.

    In short, governance of the features and processes (and people) comprising the kernel and its development.

    Yeah. I already hear the complaints. "Too much corporate style overhead."

    And the alternative, for a sustainable, improving, and growing kernel, is what? More of the same?

    Just how long will that last?

  24. ryokeken

    he calls it a feature lol

    mate ain't biased at all lol

  25. Jamie Jones Silver badge
    Thumb Up

    Kent Overstreet wrote:

    "30 years ago, Linux took over by being a real community effort. But now, most of the development is very corporate, and getting corporate developers to actually engage with the community and do anything that smells of unpaid support is worse than pulling teeth - it just doesn't happen."

  26. boatsman
    Pint

    its my data being safe is what I am concerned about....

    not what the brand / name / concept / ingenuity of the FS is

    xfs: dead in the water, since the prime genius behind it is behind bars as well.

    btrfs: irrecoverable errors, data loss. That was *my* data, not somebody else's. I was not the only one, clearly.

    I had a backup, sure. But backups are there to recover from hardware failure and human failure (deleted something I should not have deleted),

    not to waste time on a FS I cannot trust.

    That leaves me with ext4 as the only proven option.

    Maybe not so fast, not so fancy. But it does have point-in-time restore (snapper) to catch horror scenarios...

    and it does not junk my data.
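    For what it's worth, snapper on ext4 only works with thin-provisioned LVM underneath, since there are no btrfs subvolumes for it to snapshot. A minimal sketch of that setup, assuming a hypothetical thin logical volume already mounted at /data (the config name and mount point are illustrative, not from the thread):

    ```shell
    # Register the ext4-on-LVM-thin mount with snapper; the "lvm(ext4)"
    # backend takes LVM thin snapshots instead of btrfs subvolume snapshots.
    snapper -c data create-config --fstype "lvm(ext4)" /data

    # Take a point-in-time snapshot before doing anything scary.
    snapper -c data create --description "before upgrade"

    # List the snapshots available to restore from.
    snapper -c data list
    ```

    Plain ext4 without LVM thin provisioning gets no snapper support at all, so the "point-in-time restore" safety net depends on how the volume was set up.
    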

    1. Androgynous Cupboard Silver badge

      Re: its my data being safe is what I am concerned about....

      > xfs: dead in the water, since the prime genius behind it is behind bars as well.

      That would be ReiserFS you're thinking of, unless someone at SGI also made some poor life decisions.

  27. boatsman
    Coat

    Nothing to do with a personality clash. Read the thread.

    It's this Overstreet person who has it in the wrong order.

    If there are rules and you want them changed, you can try to get them changed.

    If not successful, the rules stay as they are.

    Overstreet thinks he is above that.

    That's OK if nobody's work gets hurt.

    But that's not the case. The kernel is the beating heart, and his baby (bcachefs) is experimental.

    He wants the privilege to possibly screw up the kernel, the work of 3000+ other people.

    Can't have that, is what Linus and everyone else in that discussion is saying.

    That is all there is to it.

  28. ThoughtCrime

    Btrfs works for me

    I did a new install (I don't even remember what distro, now--I try new ones monthly) and got BTRFS by default. Loved it, haven't looked back.

    I'm not buying the whole "btrfs eats data" argument. Meta wouldn't be running everything on btrfs if it weren't reliable. I've lost data on filesystems before. There isn't a complex piece of software in existence without bugs. Is ext4 _more_ reliable? Probably. Is the extremely small possibility of losing data that I wouldn't have lost on ext4 a reason to give up checkpointing and COW? I think not.

    1. Anonymous Coward
      Anonymous Coward

      Re: Btrfs works for me

      Meta had enough cash to have failovers, duplication, and backups up the wazoo. They don't NEED btrfs to be 100% reliable.

    2. pwl

      Re: Btrfs works for me

      Possibly openSUSE, where it is the default, as it is for SUSE Enterprise, where it seems to work for most users without serious incident.

  29. l8gravely

    Kent: great programmer; lousy person

    I've had my own run-ins with Kent. I am absolutely amazed at his coding ability and focus. But he once asked for comments on the usability of the bcachefs tools and such, and I gave him my opinion, carefully noting that I was not a programmer and that this came from my IT hat as a Unix SysAdmin. Boy, did I get flamed for providing useless feedback that wasn't all praise and hallegleuyahs (not going to bother spell checking that!), so I'm a bit off bcachefs at this time.

    But I do like the ideas it has, and the scalability, the no-fsck-really-needed design, etc. It's not as fast as hoped, but honestly, people have so many special cases in filesystems for direct I/O, threading, and other hacky ways to squeeze more performance out of their special sauce (which can't be changed for some reason) that I know writing a filesystem is a chore. And with all the new features people want/need, it's hard to be performant.

    So yes, Kent is a prickly pear who can't/won't/hasn't yet learned how to deal with people and pushback in a graceful manner. This is his problem. It might be our loss... but it's his problem to solve.

  30. Andrew Williams

    Who wrote this?

    Because it reads as if written by someone associated with the bloke whose submission got dropped like it's hot.
