
> The executive summary is that bcachefs is a next-generation file system that, like Btrfs and ZFS, provides COW functionality.
So I guess you'll need to decide if you want to run bcachefs or the udder one...
The latest stable Linux kernel, 6.7, is out and finally includes the new next-gen copy-on-write (COW) bcachefs file system. Linus Torvalds announced the release on Sunday, noting that it is "one of the largest kernel releases we've ever had." Among the bigger and more visible changes are a whole new file system, along with …
> So I guess you'll need to decide if you want to run bcachefs or the udder one..
Certainly something to ruminate on. Cud be a good thing.
File system transitions are tricky things to do, not something you want to burger up. Get it wrong and the boss gets properly cheesed off. Get it right and we’re all in clover. To skip it probably needs a good alibi. “Son,” says the boss, “don’t let me down now”…
Personally speaking I’d be sorry to see a decline in ZFS, which is a properly good piece of tech. Still, it may be time to moove on to pastures new, where the grass is greener.
I've moved my boxes from f2fs to ZFS because of f2fs's fragility regarding system crashes
It wasn't an easy decision, but after having to manually intervene in boot sequences for the Nth time thanks to power hits and an unstable graphics driver, I finally bit the bullet and put ZFS on root after using it on non-boot filesystems for nearly 20 years.
[Author here]
> it's arguable that it's Linux that is encumbered with licensing restrictions.
That's fair enough.
OTOH, I suspect that the number of Linux deployments, especially if you count Android and ephemeral VMs, is ITRO 3-4+ orders of magnitude greater than all other xNix systems combined. So it's all relative, isn't it?
Sadly, it isn't the first time that something FOSS has been totally re-implemented because of licensing restrictions and it won't be the last.
How is that Linux’s fault?
It's slightly more true that the GPL has "licensing restrictions".
It might be fairer to just say that the two bits of software use mutually incompatible licenses, rather than pinning one of them as the more "encumbered" one preventing integration with the other.
Your point that Linux was around first is neither here nor there.
They didn’t want it in a competing OS.
As said before, it was adopted by FreeBSD quite rapidly.
One can even argue the ZFS license is actually compatible with Linux and the GPL, but Oracle has stayed silent and never offered their opinion on those legal intricacies, and nobody wants to be the first to try it and risk getting sued.
Where was that point, because I didn't notice it...?
Here:
"developed ZFS for Solaris. They didn’t want it in a competing OS. They chose the license that achieves that." "How is that Linux’s fault?"
The AC implies it was Sun's/Oracle's responsibility to choose a license for ZFS that would be Linux-compatible, and that doing otherwise indicates they were intentionally trying to hobble other OSes.
Fairly sure SunOS predates Linux by a decade or so.
ZFS does not predate Linux. I don't see how SunOS enters into it.
[Author here]
> Oracle bought Sun. They still don’t want ZFS in Linux.
Actually I think this is a fair summary.
Oracle does not "want" ZFS in Linux. Obviously it would be a big selling point _for Oracle Linux_ if its distro included ZFS in place of Btrfs, and Oracle is pretty much guaranteed not to sue itself.
*But* if Oracle does it, then that means it's OK, and that means everyone else can do it, and then Oracle has effectively granted permission to the entire Linux industry. And I think it doesn't want to do that.
> Obviously it would be a big selling point _for Oracle Linux_ if its distro included ZFS in place of Btrfs, and Oracle is pretty much guaranteed not to sue itself.
But they may be sued by others for using a non-GPL filesystem in the Linux kernel.
Here may != will, but it may be a reason for them not to try their luck.
But Btrfs has no licensing controversy over (in)compatibility with the GPL, and ZFS does. And Oracle seems, quite deliberately, to be refusing to shed any light on that issue, even at the expense of not being seen to support one of its own flagship OS features.
You do you; but for my part, I think a restriction that prevents anybody from taking the hard work done by members of the Free Software community (work intended to be available for everyone to enjoy, study, share and adapt), adding some deliberate incompatibilities, and turning it into a proprietary product that users are locked out of is actually a good thing.
[Author here]
TBH I don't know but I suspect it's the same. It applies to most Linux filesystems.
E.g. https://en.wikipedia.org/wiki/Comparison_of_file_systems#Limits
If you want much longer filenames, time to look at:
[1] Using NTFS
[2] Switching to DragonflyBSD and HAMMER2
[3] Your life choices.
Yep, that is the link I threw around in the other thread in response to "Does Windows still limit file names to 255 characters?". You obviously read that :D.
1: I am using NTFS (which can actually handle 32700-character path lengths, though I have only seen that on customer NTFS filesystems).
2: Possibly on a NAS box, but very unlikely since my current NAS box works too well and is too versatile. (Linux Riser would work too).
3: I should have bought Apple stock when it was around 23 euro cents. I should have bought A LOT MORE AMD stock than I did when it was below €2 in 2016.
3.1: Software RAID + long paths + deduplication + encryption + storage tiering + fine-grained ACLs + snapshots, the latter accessible from any Windows client since Windows 2000... I'll stick with NTFS + Storage Spaces, especially since Windows dedup is more effective than many expect. It has worked fine for about ten years now, including OS upgrades on the "NAS" from 2012 R2 to 2016, then 2019, now 2022, without any problems. (Edit: including swapping 4×8 TB Storage Spaces parity disks for 4×12 TB ones one by one, then growing the vdisk + filesystem without problems, and switching the hardware without needing to reinstall anything in 2016.) And the only data losses were my mistakes, not hardware or Windows faults.
Those limits are not what they seem to be, and, having managed much larger filesystems (5 PB+), I can tell you for certain that NTFS does not outperform in any of the metrics you seem to like. It is trivially easy to cause problems for native file-management tools and network file sharing when handling certain combinations of characters or long file names, and the software RAID is certainly no easier to manage than ZFS, which can store much, much more. I've actually kept a few drives around with curiously corrupted NTFS filesystems so I can demonstrate some of these problems on demand.
It's great if it works for you, but is it as good or better than Linux options? Testably no.
Yeah, but where is the need for more than 32700 characters in a path? That's a directory depth of at least 127, each level around 250 characters, plus the actual file. Good enough for the whole Lord of the Rings trilogy as a path in NTFS.
You have been able to mount a volume into a path instead of assigning it a drive letter since Windows 2000. Very useful for having a file server's D:\data\<division name> full of directories which are each a different volume mounted there. Flexible too, since you can prepare the swap in the background and then just switch the mount to the newer, bigger, faster volume. Network-wide as DFS-N since Server 2003, allowing you to switch whole file servers in the background much more easily while keeping the same network path. (Steer clear of DFS-R unless it is a mostly-read volume like NETLOGON or your network-install directory.)
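For illustration, a quick sketch of the folder-mount approach with the built-in mountvol tool (the volume GUID and directory below are hypothetical placeholders):

```shell
:: Windows cmd sketch; the volume GUID and paths are hypothetical placeholders.
:: List the volume GUIDs the system knows about:
mountvol

:: Mount a volume into an empty NTFS directory instead of a drive letter:
mkdir D:\data\Accounting
mountvol D:\data\Accounting \\?\Volume{00000000-0000-0000-0000-000000000000}\
```

Swapping a mount later is just a matter of `mountvol D:\data\Accounting /D` followed by re-running mountvol against the new volume's GUID.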
You might get that on the big jobs.
You can easily exceed the NTFS limit by accident and, more importantly, even more easily exceed the limits of SMB, Explorer and other file-management tools in Windows. I regularly run into problems with this when transferring file trees to a Windows machine during projects.
> I regularly run into problems with this when transferring file trees to a Windows machine during projects.
1. Learn robocopy. It can handle long paths since at least Server 2003; I never needed it in Windows 2000 or before. Or, to be exact, I used different workarounds for Windows 2000 and NT 4.0, and it did not happen that often back then.
2. Learn the \\?\c:\ (or \\?\UNC\Server\share) syntax. It can handle long path names since Vista/Server 2008 (if all updates are installed) for CMD/dir/xcopy etc. And since 2008 R2 with PowerShell 5.1 it works in PowerShell too, but you have to use -LiteralPath on some cmdlets.
You can use "Method 2" with most file managers as well.
Oh, you did not mention how you transfer, but I suspect Explorer copy, which is not the best way in quite a few cases, especially with huge numbers of files (large files are OK).
And if you transfer millions of files, learn robocopy /create. No matter which filesystem the target has, this lowers fragmentation, especially fragmentation of the directory storage (not the contents of the directory, but the clusters which hold the data structures for the directory).
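For the record, hedged examples of both tips (the server, share and local paths here are hypothetical):

```shell
:: Windows cmd sketches; server, share and local paths are hypothetical.
:: 1. robocopy handles long paths and retries transient errors
::    (/E = include empty subdirectories, /R and /W = retry count and wait)
robocopy \\server\share\projects D:\projects /E /R:2 /W:5

::    /CREATE lays down the directory tree and zero-length files first,
::    which reduces directory fragmentation when millions of files follow
robocopy \\server\share\projects D:\projects /E /CREATE

:: 2. The \\?\ prefix bypasses the 260-character MAX_PATH limit
dir "\\?\D:\some\very\deep\path"
dir "\\?\UNC\server\share\some\very\deep\path"
```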
> Can’t do that with network shares
mklink /d c:\tmp\test \\server\share\directory
Requires admin rights though. And it is system-wide, not per user, just like with Unix (unless you do that within the user profile or home directory).
> or hot-pluggable devices, as I recall.
Works with my NTFS / FAT32 USB sticks. I knew, but I just tried, and it still works. And Windows got very good at remembering which drive was mounted where.
> Or those things called “storage spaces”.
Of course it does. You create a pool, create a vdisk, on that vdisk a volume which you can mount anywhere, including into an NTFS path. Applies to "Storage Spaces Direct" too.
Thank you for spreading information which is so easy to falsify!
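For the curious, the pool-vdisk-volume sequence described above can be sketched in PowerShell roughly like this (the pool, vdisk and folder names are hypothetical placeholders, and the machine needs spare poolable disks):

```shell
# PowerShell sketch; pool/vdisk/folder names are hypothetical placeholders.
# 1. Pool the spare physical disks
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DemoPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# 2. Create a parity vdisk on the pool
New-VirtualDisk -StoragePoolFriendlyName "DemoPool" `
    -FriendlyName "DemoVDisk" -ResiliencySettingName Parity `
    -UseMaximumSize

# 3. Partition, format, and mount into an NTFS folder
#    (the mount point must be an existing empty directory)
New-Item -ItemType Directory -Path "C:\Mounts\Data" -Force | Out-Null
$part = Get-VirtualDisk -FriendlyName "DemoVDisk" | Get-Disk |
    Initialize-Disk -PassThru | New-Partition -UseMaximumSize
$part | Format-Volume -FileSystem NTFS
$part | Add-PartitionAccessPath -AccessPath "C:\Mounts\Data"
```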
There are also two completely independent sets of file permissions in Windows. The separate teams responsible for the kernel and userland couldn't agree on one set. I assume it's a result of the NT 4.0 development cycle, where the delineation between what was a kernel responsibility and what was a userland one got broken by Gates insisting on performance over stability.
This has been a feature for a long time, but keep in mind that not every aspect of Windows treats it equally. I've seen some very strange bugs even under Windows 10/11 related to access or manipulation of files in folder mounted drives with both Microsoft software and that of 3rd party vendors.
Besides, is Microsoft wrong to assume that “26 drive letters ought to be enough for anybody”?
It's not MS that's making that assumption. If you want to mount all your partitions under root, that's been possible since the days of MSDOS.
Windows uses device names (you may be familiar with shares like c$ and IPC$, and devices like PRN and COM1). Windows assigns device names like "c:" to partitions by default, but even with DOS that was not enough for everybody and everything; that's just for people who don't care, never had to mount a tape drive, pipe, mailslot, stream or file folder, or whose in-depth knowledge of other systems is complemented by the shallowness of their knowledge of Windows.
No, it has “reserved” file names. And the rules for interpreting those are, shall we say, a massive, byzantine hack.
Also I have it on credible authority that the above attempt at exhaustive enumeration of the cases has managed to miss a few.
As a Gentoo user, I have the ultimate flexibility in choice, and here's the filesystem I select:
ext4
Yep. It does the job, and I need not learn about a bunch of stuff I'll never use to get it working. Of course, we speak of a humble single-user installation: no NAS, cloud, containers or whatever over-engineered hype to make you feel important.
[Author here]
> I need not learn about a bunch of stuff I'll never use to get it working
TBH that was my attitude on my own Linux home servers for a couple of decades. ext* + mdraid does all I need, so why faff around?
Then I tried an experimental RAIDZ in a remote VM. Worked in seconds, on a slowish host... but, I thought, they were only tiddly little 20GB virtual disks I was RAIDing.
So I tried at home, on a Raspberry Pi 4 during lockdown. It took 45 minutes to compile the ZFS modules, because I suppose Canonical didn't think anyone would be using ZFS on a Pi.
But creating a 6TB array from 8TB of USB3 spinning hard disks still only took a couple of minutes.
It is much, much simpler and easier than mdraid and ext4, and the tooling is better. I am not often as impressed as I was by ZFS on Ubuntu.
That same array -- with some (but not all) different disks in it -- is still running today, on an HP Microserver running TrueNAS Core. Importing it was amazingly quick and easy, too.
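For anyone curious what that looks like in practice, here is a minimal sketch of creating and later migrating such a pool (the pool name and device paths are hypothetical placeholders):

```shell
# bash sketch; pool name and device paths are hypothetical placeholders.
# Create a raidz pool across four USB disks (by-id names survive reboots
# and port changes better than /dev/sdX):
zpool create tank raidz \
    /dev/disk/by-id/usb-WD_disk1 /dev/disk/by-id/usb-WD_disk2 \
    /dev/disk/by-id/usb-WD_disk3 /dev/disk/by-id/usb-WD_disk4
zpool status tank

# Later, moving the array to another machine (e.g. a TrueNAS box):
zpool export tank    # on the old host
zpool import tank    # on the new host, after attaching the disks
```

That one `zpool create` replaces the mdadm create, mkfs, fstab-edit dance, which is a large part of why the tooling feels so much simpler.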