
Catchup
Linux playing catchup to FreeBSD I see! *troll icon*
The next minor version of OpenZFS is nearly ready, and ZFSBootMenu makes it easy to boot Linux from it, via a clever workaround. The advanced OpenZFS filesystem is getting close to its next release, version 2.2, with release candidate 3 (around this time last year, OpenZFS 2.1 got to rc8, so it might be a little while yet). …
I cannot for the life of me understand why the kernel dev team keeps changing the API around just to spite OpenZFS.
And it seems like they're trying to push Btrfs as the be-all and end-all challenger that will kill OpenZFS. Stop it. Setting up a software RAID array with an NVMe cache using Btrfs is a f**king nightmare that requires me to remember the stupidly long UUIDs of my disks!
Every time a new kernel comes out, every single rolling distro I use buckles and I can't log in because my /home is on the ZFS volume.
I've had the same set of files with zero loss of data for over 10 years, across multiple hardware upgrades, disk upgrades and failures both normal (a disk dying) and preventable (tripping over a bundle of SATA cables).
ZFS has always recovered and just kept going like nothing happened.
It's easy to install these days; Debian just has packages for it. Not sure what the article is on about there (nobody has used the userspace version since the earliest releases, and the kernel modules were on GitHub before the distributions started packaging them).
I don't do zfs root mostly because the OS is easily replaceable anyway, and dedicating multiple drives to it seems overkill.
On my home server, according to zpool history:
2011-04-14.16:22:14 zpool create filestore raidz ada1 ada2 ada3 ada4
Since then, all of the 2TB drives have been replaced, one at a time, with 4TB drives, and the pool grown to suit once all were replaced. The ZFS version/filesystem has been updated a few times as well. I'm happy with it; it's very robust.
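From memory, each swap went something like this (the device names are a guess at this point, so treat them as placeholders):
zpool offline filestore ada1      # take the old 2TB drive out of service
# physically swap in the 4TB drive, then:
zpool replace filestore ada1      # resilver onto the new drive in the same slot
zpool status filestore            # wait for the resilver to finish before touching the next drive
# repeat for ada2, ada3 and ada4, then grow the pool:
zpool online -e filestore ada1 ada2 ada3 ada4
(Setting autoexpand=on on the pool first makes that last step happen on its own.)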
FreeBSD, of course. And that's been upgraded across both minor and major versions as well.
I have been using ZFS on-and-off for a number of years, mostly off in the last 2-3 years, to be fair.
The thing I never quite got my head around is the concept of dropping a drive for a graceful replacement. With LVM I add a new PV to the VG then do a `pvmove` command to move the extents from the drive I'd like to replace to the new (possibly larger) drive. Then I can safely remove the old PV from the VG. Repeat that with all PVs then resize2fs.
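In concrete terms I mean something like this (made-up device and VG/LV names, so adjust for your own setup):
pvcreate /dev/sdb1                  # prepare the new (possibly larger) drive
vgextend vg_data /dev/sdb1          # add it to the VG
pvmove /dev/sda1 /dev/sdb1          # migrate extents off the old PV
vgreduce vg_data /dev/sda1          # the old PV can now be removed safely
lvextend -l +100%FREE /dev/vg_data/home
resize2fs /dev/vg_data/home         # grow the filesystem into the new space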
Could anyone describe a similar workflow with ZFS? I remember getting confused and concerned when adding the new drive to the zpool and the zpool size instantly grew to include it, rather than preparing to then remove an older drive. IIRC I ended up copying all the data off to another physical device, then rebuilding the zvol/zpool or whatever, and copying back to ZFS. I was probably just "holding it wrong" and need to RTFM, but can anyone point me to the right way of doing this with ZFS?
To enlarge a ZFS pool by replacing drives, you need to replace each drive in a zvol with a larger one. Assuming your zvol is composed of one or more 2-wide mirrors, you would add the new drive to a mirror, wait for resilvering to complete, drop one of the two existing drives from that mirror, and repeat for the other. Here, you created a new 1-wide mirror, which is indeed a pain to recover from and not an uncommon error (especially when attempting to add a cache disk).
If you don't actually have any free drive bays, you can use an external dock to resilver the new drive or YOLO drop one of the existing mirror drives to add the new drive in its place.
Dan Langille has written up this procedure on his blog.
Sounds like you used "zpool add" with a view to using "zpool remove", which would then evacuate the drive to be removed. You probably want to instead mirror the drive first using "zpool attach", then remove the old drive from the mirror with "zpool detach", possibly with a "zpool online -e" to expand the pool if bigger. Definitely RTFM first and monitor the resilver using zpool status in between.
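As a rough sketch, with placeholder pool and device names (check the man pages for your version first):
zpool attach tank ada1 ada2       # mirror the existing drive (ada1) onto the new one (ada2)
zpool status tank                 # wait until the resilver completes
zpool detach tank ada1            # drop the old drive from the mirror
zpool online -e tank ada2         # expand into the extra space if the new drive is larger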
you can do this but it was sufficiently long ago for me that i’ve forgotten the steps (i was doing it on solaris, to give a timescale)
it’s something like - mirror to the new larger drive, remove the old drive, then change back to a single-plex mirror (ie : not mirrored). the pool magically grows to the new disk size.
i recall it being magic enough that it just worked
(I think you meant "vdev"; a zvol is something else.)
It sounds as though you added the drive as a new vdev, intending to then remove the old vdev. The problem is, removing a vdev can only be done in some circumstances -- and for the longest time it was flat out impossible.
It is possible, however -- and always has been -- to freely add and remove mirror drives *within* a mirrored vdev. So as others have said, the usual approach is to attach a new, larger drive to the vdev, making it an (N+1)-way mirror; wait for it to resilver; detach one of the old drives, making it N-way again; then repeat, one drive at a time, until they've all been replaced. At that point, either the new space becomes available automatically or there's a command to do that (I forget which).
If the vdev isn't mirrored, but just a single drive (*not* saying I recommend this), the procedure is the same; it temporarily becomes a 2-way mirror.
RAID vdevs are a whole other story -- which I'm unable to tell since I've never used RAID.
As for removing a vdev from the pool, that can only sometimes be done. If the pool contains a RAID vdev, then none of its vdevs can be removed. (Weird, but that's what the docs say.) Also, all of the vdevs must have the same sector size -- which also means the same ashift value. There's at least one other restriction, which I forget. So it's super easy to get yourself into a situation that's a lot of grief to get yourself back out of.
The ashift thing bit me the last time I enlarged my mirror; hence the bold face :-/ I was able to recover using only the drives on hand, but it was a pain.
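If anyone wants to check where they stand before attempting a removal, something along these lines is a starting point (pool and vdev names are placeholders, and zdb's output varies between versions):
zdb -C tank | grep ashift         # every top-level vdev needs to report the same ashift
zpool remove tank mirror-1        # evacuates that vdev; refused if the pool contains a raidz vdev
zpool status tank                 # shows the evacuation progress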
The first beta of TrueNAS SCALE 23.10 also includes OpenZFS 2.2, though it's a beta based on OpenZFS 2.2.0 RC3, so I would only use it for testing purposes at the moment.
iXSystems, the company behind TrueNAS, is a major contributor to the OpenZFS code, and quite a few of the improvements in the latest OpenZFS were either done by iXSystems developers or by others sponsored by iXSystems. I believe they were also behind getting the RAIDZ expansion development back on track.