
Well, I'm with the Judean People's Front, so down with the People's Front of Judea!
The geek titans are clashing once again, and Linux supremo Linus Torvalds has warned: "I think we'll be parting ways" as of kernel 6.17. The latest installment in the continuing drama over the next-gen bcachefs filesystem is that Torvalds accepted the code, for now, but added a sobering warning: "I have pulled this, but also …"
Reading the comments, including and especially those from Kent Overstreet, I would conclude he has already resigned himself to the out-of-tree, DKMS route.
What I hadn't appreciated was that Kent is pretty much the sole developer of bcachefs, which is a concern even if a wayward bus isn't on the horizon - poor life decisions, à la Hans Reiser, can torpedo a filesystem project just as effectively.
To be charitable, I can see that Linus has particular priorities for the kernel generally and filesystems in particular, while Kent has similar filesystem priorities, perhaps in a different order - priorities which cannot consistently be mapped onto the kernel's development process.
ZFS, while an out-of-tree filesystem, is really not a good example: its history as a proprietary filesystem, and its continued development by a FOSS team across three platforms, make ZFS unique.
ZFS is available as a kernel kABI-tracking package (odd term - kBI, surely?) for the *EL family from at least one repository (as are NVIDIA drivers).
I imagine that if bcachefs were maintained as a DKMS package, and there were a clear need or demand, these repositories might package it in the same way.
The thing that really stuck out for me in the LWN comments was Overstreet railing against the Debian bcachefs-tools maintainer.
For balance, I would recommend reading the words of said maintainer: https://jonathancarter.org/2024/08/29/orphaning-bcachefs-tools-in-debian/
(tl;dr: bcachefs-tools are, apparently, unmaintainable)
> bcachefs-tools are, apparently, unmaintainable
I don't think that is a fair summary. This is a little outside my areas of knowledge, really, but that is a summary of the _what_ and not the much more important _why_.
The bcachefs-tools are written in Rust. (Arguably, maybe, the whole filesystem should be. That is a stated goal, but given what's happening it maybe should be re-examined: the filesystem is clearly very immature as it stands today in C.)
Rust is a young and not very mature language, and it changes quite fast. New point releases emerge roughly every six weeks, according to this:
https://en.wikipedia.org/wiki/Rust_(programming_language)#Versioning_system
New "editions" which for the sake of argument we can call stable versions every 2-3 years.
Debian is a stable distro, famous for including only old versions of software and then patching them to keep them secure for two to three years.
That model is incompatible with Rust code. There would need to be some measure to sync Rust editions with Debian major versions, and nobody is trying to do anything like that, AFAIK.
Even if that were sorted, the bcachefs-tools would have to stick to a stable Rust edition. That, too, could be a stretch.
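(The mechanics do exist on the Rust side, at least: a crate can pin both an edition and a minimum supported compiler version in its Cargo.toml. A minimal sketch - the crate name and version numbers below are hypothetical, purely to show the two knobs a distro freeze would need to standardise on:

    [package]
    name = "example-tools"   # hypothetical crate, for illustration
    version = "0.1.0"
    # Editions are the closest thing Rust has to a long-lived stable dialect;
    # new ones appear roughly every three years (2015, 2018, 2021, 2024).
    edition = "2021"
    # MSRV: the oldest rustc this crate promises to build with. A distro
    # could, in principle, require packaged crates to declare an MSRV
    # matching the rustc it ships for the life of a stable release.
    rust-version = "1.63"

cargo refuses to build a crate with a compiler older than the declared rust-version, which is exactly the sort of contract a Debian-style freeze would need - but nobody is enforcing it distro-wide.)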
These are the points that matter here, I think. "The tools are unmaintainable" is so sweeping that it's not a valid statement.
Perhaps one of the BSDs would be a better place for bcachefs.
Many people around OpenBSD have been lamenting FFS for some time.
I don't know if the GPL is a concern, but Apple once came close to bringing ZFS into macOS.
Apple integration would certainly be quite the comeuppance for ejection from Linux.
No thank you.
NetBSD would be a valid target; it is very much an experimental, research, and 'embedded with your own team of engineers' OS, and not one I'd want to use as my daily driver (I've tried - you very, very quickly run into issues).
FreeBSD has decent functionality, but already has enough issues in various components to make it challenging to rely on as an OS that won't surprise you - and it bundles ZFS, which is fairly free of issues. OpenBSD tends to be very stable, to the point that it is safe to run -current, even if FFS is slow.
I absolutely do not want:
* A filesystem that is not ready for prime time. Reading the discussion, it is very clear this is experimental.
* Someone who cannot play well with others.
Linus is correct - it shouldn't be in the kernel at all. Stick it in FUSE. I don't care if it's slower; it's experimental. Once all the bugs have been worked out, *then* it gets to be promoted to kernel-level code.
If end users are daft enough to use a filesystem marked as experimental for their production systems, it is not the responsibility of the rest of the community to support them. If data becomes corrupted, you restore from backup.
> I don't know if GPL is a concern
I have written at length about that.
No, GPL is not "a" concern.
GPL is *THE* concern here.
The entire point of bcachefs is that it's a next-gen storage management tool, capable of volume management, snapshots etc., and that it's in C and it's 100% GPL and it can be built into the Linux kernel. Not any other kernel: solely and vitally Linux.
There are lots of rivals that are next-gen, C-based, and comparable on features.
OpenZFS - but its licence is incompatible. That's no problem for FreeBSD, so it's in there, and as a result FreeBSD isn't remotely interested in bcachefs.
HAMMER2, if you want something even more radical and also want the BSD licence. It's part of DragonFly BSD.
NetBSD can use OpenZFS if you want.
OpenBSD isn't interested unless it's radically small and simple. A pity, as IMHO it _badly_ needs something with logical volume management.
Red Hat has Stratis.
There are others out there, like AdvFS, long ignored.
JFS is in AIX and 21st century OS/2, e.g. ArcaOS.
But the _point_ of bcachefs is to be native to Linux and built in.
If it's not built in, a lot of the point is eliminated and there's much less reason for it to exist.
Fine. But it's still not remotely ready for use except as an experiment. Kent is apparently desperate: nobody loves his ugly baby, and the kernel gambit has done nothing to increase mindshare. Honestly, he's just Don Quixote at this point. Nobody cares. LVM and the existing, well-established filesystems get the job done. Nobody gives a shit about the friction of a non-integrated solution - and of the few that do, they use ZFS.
The main issue here seems to be Overstreet’s attitude, arrogance and sense of entitlement. He also comes across in some of his replies as a bit of an obsessive nutcase.
He believes he knows better than Linus and the other maintainers and doesn’t have to play by the rules that other maintainers have to. He then argues black is white even though he’s quite clearly wrong and it’s him who is the problem, but he won’t accept that.
People have clashes on LKML all the time; most of them begrudgingly accept the status quo and abide by it. But Overstreet is incapable of doing that, and it just results in unnecessary conflict which he isn't going to win.
Getting bcachefs kicked out of the kernel just ensures that fewer people will use it, and Overstreet won't have any influence to push for changes in the way the kernel merges code.
If only he'd shut up, done as he was told, and stopped being an argumentative idiot, the respect would eventually have been earned and he would be listened to, rather than being told to get stuffed. He's done this to himself.
The thing that gets me is his argument about how important his work is and how both users of bcachefs need the new feature set (sorry, "bug fix") immediately, without any recognition that Linus is shepherding a much larger project and is accountable to many more users and developers than just Overstreet. Of course, this is the nature of the obsessive, being so focused on his one interest that his mind cannot pull back from the object of his fixation to perceive that there is a larger context and other viewpoints.
What I don't understand - and would love someone to enlighten me (!) - is why there is this massive thing around drivers being included in the kernel or not. Especially, it seems, for filesystems.
Why can Linux not have some mechanism like Windows, where manufacturers or developers can distribute their own device drivers and users load the drivers they like?
Sure, there are problems with sketchy or unreliable drivers, but that's for the developers and users to decide whether they want to use them, rather than Linus acting as an arbitrary gatekeeper of what is allowed.
Then the kernel could be much smaller and not need to include lots of drivers most people won't need.
The VFS layer between the kernel and filesystems is complex, however, and not set in stone (it is unstable), which forces filesystem maintainers to scramble whenever a new kernel is released.
What doesn't help is that the maintainers of the buggy btrfs have been delegated as gatekeepers for filesystem code, which they abuse to sabotage the competition.
>>> The VFS layer between the kernel and filesystems is complex, however, and not set in stone (it is unstable), which forces filesystem maintainers to scramble whenever a new kernel is released. <<<
Well, it should be set in stone. Production-ready operating systems like FreeBSD guarantee KAPI compatibility between minor releases.
It may require a bit more planning and discipline, but whilst it's not sexy, it's the right way to do things.
There is, in general, no expectation that internal interfaces remain unchanged between Linux kernel versions. This means that binaries for out-of-tree drivers - the closest parallel to the typical Windows case - must be built for a specific kernel version (DKMS exists to ease this, AIUI).
A technical advantage of this approach, as I see it, is that it becomes easier to eliminate technical debt, keeping those interfaces simpler - and simpler generally means more reliable. Also, I'm not sure I can describe why, but I think there's something in bringing everyone together with a shared view of the whole picture. Encouraging FOSS, and the advantages that come along with that, might be a deliberate side-effect. Perhaps there are other reasons too.
I can see a couple of interpretations of what you were getting at with your "kernel could be much smaller" comment - I think this is either already achieved, or is a concern for the distribution rather than end users. Most in-tree drivers can be built as modules that are dynamically loaded, and this is done extensively in a general Linux distribution. A driver built as a module may be entirely absent from a system where it is not needed, or shipped in an optional package to be installed where it is - the burden here is on the distribution maintainers, who build it along with the rest of the kernel.
Dynamic Kernel Module Support (DKMS) and Filesystem in Userspace (FUSE)
DKMS means recompiling a fair chunk of stuff every time you update the kernel or the filesystem driver. That takes a few minutes each time, but performance is just as good as a mainline kernel module's.
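For a sense of scale, the packaging side of DKMS is tiny: a dkms.conf in the module's source tree tells DKMS what the build produces and where to install it, and the module gets rebuilt automatically for each new kernel. A minimal sketch - the module name and version here are hypothetical:

    # dkms.conf - minimal, hypothetical example
    PACKAGE_NAME="bcachefs"
    PACKAGE_VERSION="1.0"
    # The .ko produced by the module's own Makefile, and where to install it
    BUILT_MODULE_NAME[0]="bcachefs"
    DEST_MODULE_LOCATION[0]="/kernel/fs/bcachefs"
    # Rebuild and reinstall automatically whenever a new kernel is installed
    AUTOINSTALL="yes"

With the source under /usr/src/bcachefs-1.0 (the DKMS naming convention), "dkms install -m bcachefs -v 1.0" builds and installs it against the running kernel's headers.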
FUSE means the filesystem driver runs in userspace. That sacrifices some performance, as there are a lot more context switches between kernel and userspace, but gains a huge amount of flexibility and stability - a FUSE driver can crash, get upgraded or replaced, and be restarted without taking down the rest of the system.
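To make "runs in userspace" concrete, here's a minimal sketch against the libfuse 3 API - the classic hello-world pattern, a single read-only file served by an ordinary process. This is a toy of my own, nothing bcachefs-related; the file and function names are mine.

    /* hellofs.c - minimal FUSE filesystem sketch (libfuse 3), illustrative only.
     * Build: gcc hellofs.c -o hellofs `pkg-config fuse3 --cflags --libs`
     * Run:   ./hellofs /mnt/test
     */
    #define FUSE_USE_VERSION 31
    #include <fuse.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/stat.h>

    static const char *msg = "hello from userspace\n";

    /* Report metadata: one directory (/) and one read-only file (/hello). */
    static int hello_getattr(const char *path, struct stat *st,
                             struct fuse_file_info *fi)
    {
        (void) fi;
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;
            st->st_nlink = 2;
            return 0;
        }
        if (strcmp(path, "/hello") == 0) {
            st->st_mode = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size = strlen(msg);
            return 0;
        }
        return -ENOENT;
    }

    /* List the root directory's contents. */
    static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                             off_t off, struct fuse_file_info *fi,
                             enum fuse_readdir_flags flags)
    {
        (void) off; (void) fi; (void) flags;
        if (strcmp(path, "/") != 0)
            return -ENOENT;
        filler(buf, ".", NULL, 0, 0);
        filler(buf, "..", NULL, 0, 0);
        filler(buf, "hello", NULL, 0, 0);
        return 0;
    }

    /* Serve reads from the in-memory string. */
    static int hello_read(const char *path, char *buf, size_t size, off_t off,
                          struct fuse_file_info *fi)
    {
        (void) fi;
        size_t len = strlen(msg);
        if (strcmp(path, "/hello") != 0)
            return -ENOENT;
        if ((size_t) off >= len)
            return 0;
        if (off + size > len)
            size = len - off;
        memcpy(buf, msg + off, size);
        return (int) size;
    }

    static const struct fuse_operations hello_ops = {
        .getattr = hello_getattr,
        .readdir = hello_readdir,
        .read    = hello_read,
    };

    int main(int argc, char *argv[])
    {
        /* Every operation above is a kernel<->userspace round trip - that's
         * the performance cost; the process being restartable is the win. */
        return fuse_main(argc, argv, &hello_ops, NULL);
    }

Every getattr/readdir/read the kernel forwards crosses into this process and back, which is exactly the context-switch overhead described above - and exactly why the driver can be killed, upgraded and remounted without rebooting.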
>>> Why can Linux not have some mechanism like Windows, where manufacturers or developers can distribute their own device drivers and users load the drivers they like? <<<
That can be done already, if you want. Personally, I like the fact that you can install and it "just works" - you can even generally copy a whole disk to a different machine and it will work without having to reinstall or hunt down and configure drivers.
As for the size, there is no need for the kernel to include all those drivers: you can prune your kernel config or, even better, the drivers can be built as kernel modules that get loaded on demand.
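In kernel-config terms that's the difference between =y (built into the kernel image), =m (loadable module), and unset (omitted entirely). Taking the filesystem at hand as the example, in kernel trees that carry it the switch looks like:

    # Built into the kernel image:
    CONFIG_BCACHEFS_FS=y
    # Or built as a module, loaded on demand when a bcachefs volume is mounted:
    CONFIG_BCACHEFS_FS=m
    # Or left out of the build entirely:
    # CONFIG_BCACHEFS_FS is not set

Distributions overwhelmingly pick =m for anything optional, which is why a stock distro kernel image stays small while supporting a vast range of hardware.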
I'm a FreeBSD person myself, and many modules are loaded automatically as soon as the system detects the corresponding hardware. Unused modules do nothing but use up a bit of disk space, and even then they can be deleted if required.
I'm sure Linux does similar. One thing, though: the kernel ABI in FreeBSD is guaranteed to be stable between non-major releases (apart from a current problem with video drivers) - so that's about two years of stability, and closer to four when you consider that two major releases are supported at a time.
… but it’s clear every great project needs a leader and strong decision maker.
Linux is one of the greatest computer projects of all time, which must mean Linus is one of the greatest project leaders of all time.
Cross him at your peril or come armed with irrefutable reasoning.
Clearly this fs project failed twice at that.
I can’t even be bothered to look it up.
Maybe someone will fork it. They wouldn't even have to do a lot of heavy lifting if Overstreet intends to keep developing it. They could basically follow along behind his work, taking fixes and enhancements (once they've proven themselves by sitting in his codebase for a while without regressions or further fixes) for submission upstream.
If Linus and his lieutenants didn't accept some of that as-is and it had to be tweaked to fit their standards, so be it; the forked version would slowly diverge from Overstreet's. But it is clearly of benefit to have a filesystem in the kernel. Even Apple, with their Mach-based hybrid kernel, has spent years slowly paving the way towards the day when they can move the filesystem and networking stack wholesale out of the BSD layer to run in userspace, but isn't quite there yet. It is very hard to run a filesystem in userspace without a performance impact - doubly so when it is trying to do a lot of other things, as bcachefs is.
Kent himself is very clear that this is experimental, but he seems to feel an understandable but misguided sense of responsibility towards users running it in production.
If a feature is experimental and there is a possibility of data loss, it is up to users to maintain decent backups and not use it in production. Everyone else shouldn't suffer for their poor decisions.
Keep it in FUSE, or whatever else out of the kernel tree, until it's stable. It is *normal* for in-development filesystems to be slower, loaded with debugging instrumentation, anyway.
> It has been a decade of fights, why not just use FUSE?
Lots of reasons - enough that FUSE would totally invalidate the project:
* You can't boot off FUSE.
* It kills performance.
* The point of it is to manage volumes (replacing LVM2) and manage snapshots (including managing kernel versions)
* If it's outside the kernel (like ZFS) then it can't share the kernel's cache
To do its job, this needs to be right there in the heart of the system. If it's not, might as well not bother.
All I'm hearing here is "it'll be much slower". That is normal for an unproven filesystem.
Booting is not essential for an experimental filesystem.
Using less memory is not essential for an experimental filesystem.
Snapshots don't need to include upgrading the OS until that's proven in other scenarios.
Prove reliability *first*, add the other features later. This is normal: ZFS boot support was only added to NetBSD recently (and even then it needed a special 'early init' switch, otherwise boot failed - guess how I know this?). It's entirely possible to use both LVM and non-LVM partitions in the same system too (which is what I do in FreeBSD; although ZFS is 'stable', it is not recommended to use ZFS for a mirrored swap partition, as it can fail under low-memory conditions [1]).
[1] This is a pain, and typical FreeBSD. The installer will not warn you about this; you have to 'know it' from a nebulous forum post or similar. Combining mirrored swap and ZFS data requires manual configuration outside the installer. Oh, and the instructions for doing this on a GPT disk specify a ridiculously small EFI partition, and don't highlight in large letters: 'mirror your EFI partition, or don't put it in fstab, as otherwise it will break redundancy'.
And "not bother" is frankly the point. His project has no value to the ecosystem. Only in his warped mind is the current state of affairs in need of his solution - which is buggy as hell.
Who gives a god damn if you can't boot off of it? That's why we have /boot and root filesystems that are small and essentially read-only. The myopia over ZFS on root is the same mental disease. It's a stupid thing to do, ALWAYS!
You have a process to prevent chaos. The team has to follow the process, or agree via whatever mechanism exists to change it. You can't have an individual, no matter how gifted in a certain area, breaking it. Unfortunately, that type of gifted can be one-dimensional, disregarding everything outside their own focus and beliefs. I assume it's some sort of spectrum thing. I once came across this: a guy insisted on using a language and tools he considered best, with total disregard for what the rest of the business was doing. The rest of the business had legitimate reasons. No end of explaining and requesting could get him to change, so he had to go.