OpenZFS 2.2 is nearly here, and ZFSBootMenu 2.2 already is

The next minor version of OpenZFS is nearly ready, and ZFSBootMenu makes it easy to boot Linux from it, via a clever workaround. The advanced OpenZFS filesystem is getting close to its next release, version 2.2, with release candidate 3 (around this time last year, OpenZFS 2.1 got to rc8, so it might be a little while yet). …

  1. Anonymous Coward

    Catchup

    Linux playing catchup to FreeBSD I see! *troll icon*

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Catchup

      [Author here]

      > Linux playing catchup to FreeBSD

      You are not wrong.

      1. Anonymous Coward

        Re: Catchup

        Damn. I shouldn't have posted anonymously!

    2. RAMChYLD

      Re: Catchup

      I cannot for the life of me understand why the kernel dev team keeps changing the API around just to spite OpenZFS.

      And it seems like they're trying to push Btrfs as the be-all-and-end-all challenger that will kill OpenZFS. Stop it. Setting up a soft-RAID array with an NVMe cache using Btrfs is a f**king nightmare that requires me to remember the stupidly long UUIDs of my disks!

      Every time a new kernel comes out, every single rolling distro I use buckles and I can't log in because my /home is on the ZFS volume.

      1. classabbyamp

        Re: Catchup

        > Every time a new kernel comes out, every single rolling distro I use buckles and I can't log in because my /home is on the ZFS volume.

        You might be interested in Void, as one of our criteria for updating the default kernel package is ZFS support.

  2. abend0c4 Silver badge

    Thanks for this

    I've been using ZFS for some time - makes disk management significantly easier and IMHO rather more robust - but not on the root file system because, well, laziness I suppose: just a little too much trouble. Will give this a go.

    1. Anonymous Coward

      Re: Thanks for this

      I have had OpenZFS as the root filesystem and it was easy peasy, but that was on FreeBSD. On Linux it might be a bit harder; your best bet is probably a distro that includes OpenZFS as standard.

    2. Anonymous Coward

      Re: Thanks for this

      I've had the same set of files with zero loss of data for over 10 years, across multiple hardware upgrades, disk upgrades, and failures both normal (disks dying) and preventable (tripping over a bundle of SATA cables).

      ZFS has always recovered and just kept going like nothing happened.

      It's easy to install these days; Debian just has packages for it, so I'm not sure what the article is on about there (nobody has used the userspace version since the earliest releases, and the kernel modules were on GitHub before the distributions started packaging them).

      I don't do ZFS root, mostly because the OS is easily replaceable anyway, and dedicating multiple drives to it seems overkill.

      1. John Brown (no body) Silver badge

        Re: Thanks for this

        On my home server, according to zpool history:

        2011-04-14.16:22:14 zpool create filestore raidz ada1 ada2 ada3 ada4

        Since then, all of the 2TB drives have been replaced, one at a time, with 4TB drives, and the pool grown to suit once they were all swapped. The ZFS version/filesystem has been updated a few times too. I'm happy with it; it's very robust.

        FreeBSD, of course. And that's been upgraded across both minor and major versions as well.
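
        For anyone curious, the drive-by-drive swap is roughly this (ada5 here is a made-up name for the new disk; check the man pages and zpool status before each step):

        zpool set autoexpand=on filestore # let the pool grow once the last drive is swapped
        zpool replace filestore ada1 ada5 # resilver onto the new 4TB disk
        zpool status filestore # wait for the resilver to finish
        # ...then repeat for ada2, ada3 and ada4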

  3. firstnamebunchofnumbers

    ZFS... pls explain

    I have been using ZFS on-and-off for a number of years, mostly off in the last 2-3 to be fair.

    The thing I never quite got my head around is the concept of dropping a drive for a graceful replacement. With LVM I add a new PV to the VG then do a `pvmove` command to move the extents from the drive I'd like to replace to the new (possibly larger) drive. Then I can safely remove the old PV from the VG. Repeat that with all PVs then resize2fs.
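
    Concretely, that LVM dance is something like this (VG and device names made up):

    pvcreate /dev/sdd # prepare the new, possibly larger disk
    vgextend vg0 /dev/sdd # add it to the volume group
    pvmove /dev/sdb /dev/sdd # migrate extents off the old PV
    vgreduce vg0 /dev/sdb # drop the old PV from the VG
    lvextend -l +100%FREE /dev/vg0/data # grow the LV into the new space
    resize2fs /dev/vg0/data # then grow the filesystem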

    Could anyone describe a similar workflow with ZFS? I remember getting confused and concerned when I added the new drive to the zpool and the zpool size instantly grew to include it, rather than it being staged so I could then remove an older drive. IIRC I ended up copying all the data off to another physical device, rebuilding the zvol/zpool or whatever, and copying back to ZFS. I was probably just "holding it wrong" and need to RTFM, but can anyone point me to the way of doing this with ZFS?

    1. Brad Ackerman
      Boffin

      Re: ZFS... pls explain

      To enlarge a ZFS pool by replacing drives, you need to replace each drive in a zvol with a larger one. Assuming your zvol is composed of one or more 2-wide mirrors, you would add the new drive to a mirror, wait for resilvering to complete, drop one of the two existing drives from that mirror, and repeat for the other. Here, you created a new 1-wide mirror, which is indeed a pain to recover from and not an uncommon error (especially when attempting to add a cache disk).

      If you don't actually have any free drive bays, you can use an external dock to resilver the new drive or YOLO drop one of the existing mirror drives to add the new drive in its place.

      Dan Langille has written up this procedure on his blog.

    2. zuul

      Re: ZFS... pls explain

      Sounds like you used "zpool add" with a view to using "zpool remove", which would then evacuate the drive to be removed. You probably want to instead mirror the drive first using "zpool attach", then remove the old drive from the mirror with "zpool detach", possibly followed by a "zpool online -e" to expand the pool if the new drive is bigger. Definitely RTFM first, and monitor the resilver with "zpool status" in between.
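
      As a rough sketch, assuming a plain one-disk pool called tank, old disk sda and a bigger new disk sdb (names made up):

      zpool attach tank sda sdb # mirror the new disk onto the old one; resilver starts
      zpool status tank # wait until the resilver has completed
      zpool detach tank sda # drop the old disk out of the temporary mirror
      zpool online -e tank sdb # expand the pool onto the bigger disk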

      1. Anonymous Coward

        Re: ZFS... pls explain

        Yes, that's correct.

        You add it to the vdev, not to the pool - i.e. as a mirror of the vdev rather than of the pool itself.

        It will then automatically rebuild the new disk from the existing disks in the vdev.

    3. pwl

      Re: ZFS... pls explain

      You can do this, but it was sufficiently long ago that I've forgotten the steps (I was doing it on Solaris, to give a timescale).

      It's something like: mirror onto the new, larger drive, remove the old drive, then change back to a single-plex mirror (i.e. not mirrored). The pool magically grows to the new disk size.

      I recall it being magic enough that it just worked.

    4. C R Mudgeon

      Re: ZFS... pls explain

      (I think you meant "vdev"; a zvol is something else.)

      It sounds as though you added the drive as a new vdev, intending to then remove the old vdev. The problem is, removing a vdev can only be done in some circumstances -- and for the longest time it was flat out impossible.

      It is possible, however -- and always has been -- to freely add and remove mirror drives *within* a mirrored vdev. So as others have said, the usual approach is to attach a new, larger drive to the vdev, making it an (N+1)-way mirror; wait for it to resilver; detach one of the old drives, making it N-way again; then repeat, one drive at a time, until they've all been replaced. At that point, either the new space becomes available automatically or there's a command to do that (I forget which).

      If the vdev isn't mirrored, but just a single drive (*not* saying I recommend this), the procedure is the same; it temporarily becomes a 2-way mirror.

      RAID vdevs are a whole other story -- which I'm unable to tell since I've never used RAID.

      As for removing a vdev from the pool, that can only sometimes be done. If the pool contains a RAID vdev, then none of its vdevs can be removed. (Weird, but that's what the docs say.) Also, all of the vdevs must have the same sector size -- which also means the same ashift value. There's at least one other restriction, which I forget. So it's super easy to get yourself into a situation that's a lot of grief to get yourself back out of.

      The ashift thing bit me the last time I enlarged my mirror; hence the bold face :-/ I was able to recover using only the drives on hand, but it was a pain.
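
      For reference, the removal side looks roughly like this (pool and vdev names made up; whether the remove is even allowed depends on the restrictions above):

      zdb -C tank | grep ashift # one way to check that the top-level vdevs share an ashift
      zpool remove tank mirror-1 # evacuate and remove a top-level vdev
      zpool status tank # shows the evacuation progress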

  4. Len
    Happy

    Also in TrueNAS

    The first beta of TrueNAS SCALE 23.10 also includes OpenZFS 2.2, though it's a beta based on OpenZFS 2.2.0 RC3, so I would only use it for testing purposes at the moment.

    iXSystems, the company behind TrueNAS, is a major contributor to the OpenZFS code, and quite a few of the improvements in the latest OpenZFS were done either by iXSystems developers or by others sponsored by iXSystems. I believe they were also behind getting the RAID expansion development back on track.

  5. milliemoo83

    Liberator

    So, would the 7th iteration of BLAKE3 be BLAKE7?

    1. that one in the corner Silver badge
      Coat

      Re: Liberator

      I 'ate you, Butler.

      Oh, sorry, that was a 7 not a Y.

      Mine's the one with the red trim disc, ta.
