TrueNAS CORE 13 is the end of the FreeBSD version

Bad news from BSD land – the oldest vendor of BSD systems is changing direction away from FreeBSD and toward Linux. NAS vendor iXsystems has been busy this year, but apart from some statements in online user communities, it hasn't been talking about the big news. Back in 2022, we covered TrueNAS CORE 13, the new release of its …

  1. Anonymous Coward
    Anonymous Coward

    Doesn't TrueNAS Scale work just as well on the HP Microservers?

    I'm a FreeBSD guy at heart and was a bit sad when I saw this coming a year ago or so. However, as you say, it makes sense from iXSystems' perspective.

    Fortunately, as the OpenZFS codebase is shared between Linux and FreeBSD, any contributions from iXSystems to OpenZFS (such as the fast-dedup you mention) will automatically surface in FreeBSD too. Possibly even before they end up in Linux, because OpenZFS releases always need to trail slightly behind the latest Linux kernel.

    But, I don't see why having HP Microservers would be a problem if you wish to stay within the TrueNAS ecosystem. Surely TrueNAS Scale works just as well on the HP Microservers? It's just a one-time migration effort.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

      [Author here]

      > I don't see why having HP Microservers would be a problem if you wish to stay within the TrueNAS ecosystem

      Sir is missing the point. It's not the Microservers _qua_ Microservers. It's the difficulty and expense of putting more RAM in them.

      I have an N54L with 8GB, an N40L with 6GB, and a Gen 8 with 8GB. That is why I specified "with 22GB of RAM between them" in the article. 8+8+6 = 22.

      Of the 3, only the Gen 8 can realistically be upgraded at all, and even then it would be neither cheap nor easy.

      It would be considerably cheaper to replace the two older N-series Turion-based servers with newer models than to upgrade their RAM. RAM upgrades also require removal of the motherboards which means disconnecting all cabling: about a dozen cables, some tiny, some big, all difficult. And on the small island where I live, obtaining used replacement hardware involves expensive shipping, and a high probability of receiving damaged/broken kit. All 3 of these servers were bought used, and for all 3, I collected them in person.

      FreeBSD is if anything _more_ RAM-efficient than Linux. OpenZFS on Linux is _substantially_ less so because it requires two separate caches, the Linux kernel's page cache _plus_ the OpenZFS advanced read cache. These cannot be combined because ZFS cannot be merged into the Linux kernel due to a conflicting source code licence.
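To see the two caches side by side on a ZFS-on-Linux box, here's a minimal sketch (assuming the usual ZFS-on-Linux paths; it prints a note instead if the zfs module isn't loaded):

```shell
# Show the OpenZFS ARC and the Linux page cache separately.
# /proc/spl/kstat/zfs/arcstats is the OpenZFS kstat interface and only
# exists when the zfs module is loaded, hence the guard.
if [ -r /proc/spl/kstat/zfs/arcstats ]; then
    arc_bytes=$(awk '$1 == "size" {print $3}' /proc/spl/kstat/zfs/arcstats)
    echo "OpenZFS ARC size: $arc_bytes bytes"
else
    echo "OpenZFS ARC size: (zfs module not loaded)"
fi
# The kernel's own page cache, from /proc/meminfo.
page_cache_kib=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "Linux page cache: ${page_cache_kib} KiB"
```

On a machine running both, the two numbers count largely separate copies of cached data, which is the duplication described above.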

      1. Anonymous Coward
        Anonymous Coward

        Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

        [Original Coward here]. Ah, good point. I hadn't considered at all that Linux would require more RAM than FreeBSD.

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

          Fair enough. I did try to spell it out, though:


          One of the problems with ZFS on Linux is that because it's not part of the kernel, its cache must remain separate from the Linux kernel's own cache,


          1. Bebu Silver badge

            Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

            《One of the problems with ZFS on Linux is that because it's not part of the kernel, its cache must remain separate from the Linux kernel's own cache,》

            That is curious.

            Once the zfs kernel modules are loaded they are part of the kernel, which is pretty much the same as other file systems (e.g. xfs), I would think – unless mainline file systems get special per-filesystem hooks in the cache code.

            My suspicion is that the caching services zfs requires are congruent with those provided by the FreeBSD kernel (not a surprise ;) but not a good match for those provided by the Linux kernel (equally unsurprising), and consequently zfs on Linux then requires a separate caching service, or at least an extra layer or two built atop standard Linux kernel services (possibly a slight oxymoron.)

            If bhyve-based VMs were important and KVM were the future, Joyent's SmartOS is another option. Could be an interesting project porting FreeNAS to illumos. ;)

            1. ldo

              Re: the caching services zfs requires

              Yeah, the whole licensing issue sounded suspicious to me, too.

              The amount of RAM ZFS requires is just crazy. If it were managed in a more kernel-friendly way, like every other filesystem on Linux, then it could be easily freed up when regular apps ask for more RAM.

              As it is, it looks like ZFS is best run on a dedicated “storage appliance”—just don’t try to put any regular apps on the same system.

              ZFS is to filesystems what Java is to programming languages: it will happily chew up all the resources on your system, given half a chance.

              1. collinsl Bronze badge

                Re: the caching services zfs requires

                Disclosure: I run a ZFS on Linux based system at home as my primary server and as my server hosting my backups

                ZFS does indeed use a lot of RAM, but it's for performance rather than as a requirement (unless you turn on deduplication which needs tons of RAM on any system, regardless of filesystem).

                The ZFS cache is used both for selective read-ahead (predicting which files are to be accessed) and for caching the rest of a file currently being accessed (until the cache is needed again by non-ZFS processes or the file is replaced by a new file in the cache).

                ZFS can be run on systems with very little RAM; however, the performance will be much closer to that of the hard drives themselves rather than anything faster.

                If you want, you can add ZFS Intent Log (ZIL) SSD drives to improve write performance; however, read performance will always rely on RAM to cache into. You can set how much cache you want ZFS to use (right now on my main server I have this set to about 50% of 128G so I have some VM space, and on the backups server it's at 85% or so of 32G).
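To put a number on "about 50%", here's a rough sketch of computing an ARC cap as a fraction of total RAM on Linux (the 50% is just my figure; applying the value, shown in the comment, needs root and the zfs module loaded):

```shell
# Compute 50% of total RAM in bytes, as a candidate zfs_arc_max value.
mem_total_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
arc_max_bytes=$(( mem_total_kib * 1024 / 2 ))
echo "Proposed zfs_arc_max: $arc_max_bytes bytes"
# To apply it (root required, zfs module loaded):
#   echo "$arc_max_bytes" > /sys/module/zfs/parameters/zfs_arc_max
```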

              2. This post has been deleted by its author

              3. Anonymous Coward
                Anonymous Coward

                Re: the caching services zfs requires

                Yeah, the whole licensing issue sounded suspicious to me, too.

                The ZFS licensing issue boils down to mostly historic bad faith between various camps. The story goes that Sun Microsystems wasn't a fan of the GPL, and when they open sourced their ZFS implementation they allegedly went out of their way to do so under a licence (the Common Development and Distribution License (CDDL)) that was compatible with most open source licences except the GPL. That's why projects that use a licence such as MIT, BSD, or MPL (or even Oracle's later closed-source ZFS implementation, which was allegedly abandoned a few years ago) had no problem integrating it, but GPL projects did.

                This has created some bad blood in parts of the Linux camp and to this day some will refuse to work with the OpenZFS community (now the main source of ZFS development since practically all others donated their code to the OpenZFS project). Linux kernel devs have blocked the use of some kernel hooks for use by OpenZFS (and other projects that are not GPL). It's an odd combination of old grudges, Not-Invented-Here Syndrome with only a tiny smidgen of legal and technical basis.

                A potential solution would be for the OpenZFS project to relicense their code under a BSD- or MIT-style licence, but that's easier said than done. There is still some original code from the Sun era in the codebase (though not that much, as many current file and storage concepts, including SSDs, didn't exist fifteen years ago and so weren't included in the Sun implementation) which would be hard to relicense. The main issue is that the last fifteen years of ZFS development have taken place among disparate communities, individuals, and private companies. Nearly all of them joined forces and donated their code to OpenZFS 2.0 four years ago, but the licence remained CDDL. It would be fairly easy to approach companies such as Nexenta Systems, Delphix, iXSystems, Klara Systems, or the US Lawrence Livermore National Laboratory, and ask them if they are OK with the release of their OpenZFS code under BSD or MIT. It would, however, be almost impossible to reach out to hundreds of small-time developers who once scratched their own itch and contributed small changes perhaps a decade ago.

                In short, I don't see relicensing OpenZFS code happen any time soon. It's probably not worth the trouble.

                The amount of RAM ZFS requires is just crazy. If it were managed in a more kernel-friendly way, like every other filesystem on Linux, then it could be easily freed up when regular apps ask for more RAM.

                As it is, it looks like ZFS is best run on a dedicated “storage appliance”—just don’t try to put any regular apps on the same system.

                ZFS is to filesystems what Java is to programming languages: it will happily chew up all the resources on your system, given half a chance.

                ZFS does not require a lot of RAM; that's a myth. I have many small VMs for server tasks that run on OpenZFS with 1GB of RAM and they work just fine. On boot, OpenZFS (and all other ZFS implementations as far as I'm aware) assesses how much RAM is available and tunes accordingly. There are firewall appliances that have their OS on ZFS partitions. And ZFS implementations tend to only really eat up a lot of RAM if you enable dedup. Dedup is disabled by default on all ZFS implementations I know, and if you're wondering whether you should enable dedup then the answer is very clearly NO. There is only a very specific use-case for dedup, and if you are that use-case then you would already know; anyone else should leave it off.

              4. Justin Clift

                Re: the caching services zfs requires

                > The amount of RAM ZFS requires is just crazy.

                Hmmm, "requires" is probably too strong a word.

                The default memory settings that OpenZFS uses (especially for ARC max value) really do seem to be a case of "grab everything that's not nailed down".

                For stand alone appliances, that's probably ok. But for situations where it's supposed to be on the same boxes as other stuff... you're better off to rein it in.

                The two major module parameters for that (zfs_arc_min and zfs_arc_max) are easily set on the fly. To limit ARC to 2GB max, use this:

                # echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_min

                # echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max

                To set the values permanently, throw them into a new modprobe options file:

                # cat /etc/modprobe.d/zfs.conf

                options zfs zfs_arc_min=1073741824

                options zfs zfs_arc_max=2147483648

                They'll be loaded automatically at boot time from then on.
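In case the magic numbers look opaque: they're just GiB expressed in bytes. A quick sanity check:

```shell
# 1 GiB and 2 GiB in bytes, matching zfs_arc_min/zfs_arc_max above.
gib_to_bytes() { echo $(( $1 * 1024 * 1024 * 1024 )); }
gib_to_bytes 1   # 1073741824
gib_to_bytes 2   # 2147483648
```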

                1. K555

                  Re: the caching services zfs requires

                  The thing I've always admired about TrueNAS/BSD is that, given I use it as a storage appliance, it absolutely uses every shred of RAM for accelerating storage and I've never seen a hint of stability issues because of it. How much of that is down to TrueNAS and how much is inherent to it being ZFS on BSD, I don't actually know. But I get nervous around ZFS on Linux – it doesn't feel like an intrinsic part of the OS and its memory management.

                  Barely-out-of-homebrew Linux-based garbage products *ahem* *proxmox* just end up with the OOM killer stomping all over the place because an admin got slightly too excited with VM memory allocation and OpenZFS suddenly wanted to use a bunch of memory for cache.

                  Yes, you can tune this out (or just not use ZFS), but I don't really want to when I can just out-of-the-box TrueNAS.

                  I too have NL52 Microservers... and have been scouring ebay for the 16GB kits that sort-kinda-maybe work on them (unofficially).

                  1. ldo

                    Re: I get nervous around ZFS on Linux

                    Even Oracle does. Notice they don’t offer ZFS with their Linux offering? Instead they include btrfs. What kind of vote of confidence is that in their own filesystem?

                    1. Anonymous Coward
                      Anonymous Coward

                      Re: I get nervous around ZFS on Linux

                      I wouldn't call ZFS Oracle's filesystem, they did very little work on it. Oracle inherited their ZFS implementation from Sun Microsystems which only supported Unix as it was built for Solaris. All the work on building a ZFS implementation that could work on Linux took place after the fork, mainly in the ZFS-on-Linux project.

                      The ZFS-on-Linux project then merged a few years ago with a bunch of other ZFS implementations into OpenZFS 2.0. Oracle's ZFS implementation wasn't part of that effort and the division within Oracle that used to work on ZFS was allegedly closed a few years ago.

                      This means that Oracle doesn't have a ZFS implementation that could run on Linux, they have the wrong (dead end) fork.

                      1. ldo

                        Re: Oracle doesn't have a ZFS implementation that could run on Linux

                        Oracle could run the same ZFS on Linux that everybody else is running. Why don’t they?

                        1. Justin Clift

                          Re: Oracle doesn't have a ZFS implementation that could run on Linux

                          > Oracle could run the same ZFS on Linux that everybody else is running. Why don’t they?


                  2. Justin Clift

                    Re: the caching services zfs requires

                    > Barely-out-of-homebrew Linux based garbage products *ahem* *proxmox* ...

                    What issues did you hit with Proxmox?

                    I've been testing it (often breaking it ;>) in my homelab recently, to see if I'd consider it reliable enough for real world use in a colo place.

                    1. K555

                      Re: the caching services zfs requires

                      I’ll try and illustrate my position on Proxmox as best as possible. It’s a useful exercise to attempt to lay out the position I’m finding myself in, as we too are evaluating it as a potential alternative to a small VMware cluster setup for ourselves (3/4 hosts, 80 VMs), and its potential to live out on client sites and be maintained by all our engineers (yes, spurred on by Broadcom). And I’m not dead against it, but it often doesn’t do itself any favours.

                      There was some hyperbole on my part calling it ‘garbage’ – but it was a short and mildly humorous mention in the context of why I’m wary of ZFS on Linux. I run Linux distros on the five PCs that I use in and out of work and, potentially, some of them could present a ‘use case’ for ZFS (encryption, compression, volume management, cache vdevs), but I’ve never committed to it because it feels like it becomes more of an overhead to look after than it’s worth. And I mean feel – it’s the position I’ve come to from my exposure to the tech.

                      We actually wouldn’t run local ZFS on our own hypervisors anyway.

                      I’ll try to elaborate on some examples I’ve found. I accept that, in most cases, there would be the answer ‘but if you did x, y and z then it would be fine’ but, when it comes to rolling this out and maintaining it, this doesn’t fly for us. And this is my first problem with Proxmox: the attitude. If I call an (old) Land Rover Defender unreliable somewhere on the internet, there’s a horde of Nevilles and Jeffs ready to tell me “they’re perfectly reliable if you strip them down, rewire them, fit this extensive list of after-market parts and, whilst you’re at it, you may as well go all the way and galvanise the chassis”. Proxmox too has an army of zealots, and they create so much noise with “you’re doing it wrong!” that getting meaningful feedback (and possibly improving it as a product) is near impossible. I suspect the vast majority of them think Linus Tech Tips is a good source of information.

                      Having a bad community (or at least ‘crowded’ with the aforementioned zealots – I’m sure many or most users are fine! The FreeNAS forum has/had a user with an astonishingly high post count that posted far too often with a tone of ‘authority’ it took some time to learn to ignore!) isn’t the end of the world if there’s decent documentation. But the Proxmox documentation itself is poor enough that it actually has copy/paste information from the Archlinux wiki that’s not even been corrected to work in a typical Proxmox setup.

                      For example. I have systems with heavy memory over-contention by design. Many many people have managed to make Proxmox unstable by running the system memory low and/or forgetting ZFS behaviour when they first start to use it. They seek help and get met with ‘Well don’t do it! Stop being so cheap! What kind of provider sells more memory than they have!?’

                      If I’ve got 64 VMs that need 512MB active RAM when idle and 8GB of RAM when they’re in use, but they only get used every couple of months and rarely more than one or two at a time, having 512GB of RAM is ridiculous when I can sit comfortably in 64GB – it’s why we have page files (or vswap) and balloon drivers. This is a stable configuration on VMware and it can be stable on KVM, so it can definitely work on Proxmox. If you get this ‘wrong’ on VMware (e.g. the machines are more active than you planned, or you boot them all at once), you’ll slow it to a crawl. If you try it following a default Proxmox setup, you’ll just have Linux killing VM processes. Some people won’t be able to easily find out why.

                      Un-tuned ZFS makes this worse.

                      In an ‘early’ setup, a client had a single host, local ZFS storage, 16GB of RAM and two VMs allocated 8GB each. This was installed by just following the setup and letting it do its thing. Of course, one of the VMs would just shut off the moment there was high IO. Out of the box, ZFS was using 8GB and there was no page file configured. We adjusted ZFS down to use 4GB (to maintain SOME storage speed; this was a simple file server setup using large/slow drives), allocated the VMs 6GB each (still plenty for the job) and ran like that. This reduced the frequency of the killed VMs from ‘constant’ to ‘occasional’ but was still sailing too close to the wind.

                      So I pulled the host back in for a rebuild – set it back up again with ZRAM swap and some flash storage that could be used for additional swap and some ZIL/ARC, and gave the VMs 16GB each just to stress test them. Runs fine in most circumstances (and is how I run some KVM machines on Debian without the involvement of Proxmox) but ZFS could still rear its head under super heavy IO. It would grab RAM faster than the system would page out, which is where I was coming from in saying that ZFS doesn’t feel like an integrated solution on Linux. As it stands, giving ZFS 2GB and the VMs 6GB each, then turning the swappiness up, is working at a decent speed and stable. But, for example, if someone needed to run up a quick VM for some other purpose and didn’t think it all through, we’d be back to knocking the system over.

                      This is an older issue, so may have changed, but we also logged an improvement suggestion for the UI, as the menu layout caught me and others out on occasion. There’s ‘reset’ or ‘restart’ for a VM. This is fairly usual for hypervisors: one is to have the guest OS restart itself, and ‘reset’ kills the VM and starts it again – like the reset switch. Fine. However, ‘reset’ was/is missing from one of the context menus (right click?) and it’s only on the button in one of the other panels. Every other option is in both menus, but NOT ‘reset’. It’s misleading. We logged this and the devs’ response was ‘but it’s in the other place, use that’. Crap.

                      Accidentally using the ‘restart’ button on a VM whose guest OS is hung then leads to the hypervisor waiting for it to respond – which it won’t do, because it’s hung. And then you can’t use the ‘reset’ option until that’s clear. To clear it on a cluster, you need to SSH into one of the hosts to locate and clear some lock file before you then kill off the process. It’s a silly little niggle in UI design that then sends you right back to the command line for 5 minutes.

                      Most of the above can be sorted by someone with some Linux/Hypervisor/Computing background but it just prevents me being confident handing an ISO to a generalised engineer and saying ‘go use this’ because the ‘default’ configuration you’ll end up with is poor enough that it’ll fall over. I still don't feel like I've come to a truly 'nice' setup yet.

                      When I find myself just going straight back to the command-line or manually configuring things on it, I wonder how much I’m really getting from using it in the first place. When I can run the same set of VMs up on my generic Ubuntu desktop PC 3 times over without having to tweak anything, why have I made my life harder with PVE? For some menus and graphs? I love being lazy and using an out of the box product, but this isn’t it.

                      To me, it’s currently not convenient. That’s a fine place to start for a beta and some homelab tinkering, but they’re selling the thing and calling it version 8. The current attitude from the devs and the user base (again, the loud ones) worries me it won’t be driven towards a bit of polish for some time.

                      Maybe I’m just commenting on a common FOSS problem. If something isn’t polished, there’s still that ‘that’s because you don’t know what you’re doing’ attitude from some. At least the Windows people out there are used to ‘yeah, it’s a bit crap like that’.

                      1. Justin Clift

                        Re: the caching services zfs requires

                        Cool, that's all well reasoned and gives food for thought. Thanks. :)

                        The setup I'm testing (small hyperconverged cluster using ceph for vm storage), there have definitely been some learning experiences.

                        It's still way too early in my testing/learning process to be comfortable rolling it out to production, as there are times I've gotten the underlying Ceph storage "stuck" or unresponsive... and ended up having to rebuild the cluster.

                        But, I'm rapidly getting better at understanding how the underlying pieces operate, then being able to unstick a cluster that's frozen (etc). So I'm hopeful it'll turn out to be workable. :)

                        Came across this situation of yours just yesterday, and found an easier solution:

                        <quote>Accidentally using the ‘restart’ button on a VM whose guest OS is hung then leads to the hypervisor waiting for it to respond – which it won’t do because it’s hung. And then you can’t use the ‘reset’ option until that’s clear. To clear it on a cluster, you need to SSH into one of the hosts to locate and clear some lock file before you then kill off the process. It’s a silly little niggle in UI design that then sends you right back to the command line for 5 minutes.</quote>

                        I accidentally did the "restart" thing on a stuck VM yesterday too, which then promptly wedged and blocked subsequent operations.

                        (At least with Proxmox 8) double clicking the log entry for that initial wedged "restart" operation at the bottom of the proxmox gui opens a progress dialog where you can see it doing nothing. There's a "stop" button in that dialog.

                        Clicking that stop button (and giving it a few seconds), seems to correctly cancel the wedged restart job, allowing new actions (like a hard power off or whatever) to function.

                        That being said, I'm still pretty new to Proxmox and have only used Proxmox 8. No idea if that's a new thing or was just unreliable previously etc. :)

                        1. K555

                          Re: the caching services zfs requires

                          Good luck with the project.

                          I do feel like, with enough tweaking, tinkering and understanding, PVE could well turn out to be just the thing in a lot of scenarios, so I'd be interested to know how it settles for you.

                          1. Justin Clift

                            Re: the caching services zfs requires

                            Sure, ping me your contact details via the email address in my GitHub profile and I can let you know. :)

                2. Graham Perrin

                  Re: the caching services zfs requires

                  Not just min and max. Better, please, familiarise yourselves with things such as:



                  – or their Linux equivalents.

                  1. Justin Clift

                    Re: the caching services zfs requires

                    Looking at those two, why are they better to use than the arc min and max values?

                    For a single purpose system (NAS or similar), I can sort of see why arc_sys_free would be useful.

                    On systems with multiple other applications though (and where ZFS performance is irrelevant), being able to set a maximum memory cap for ZFS ARC then never having to think about it again seems better.

                    Or is that not the full picture? :)
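Either way, it's easy to check what's currently in effect. A quick sketch, assuming the usual ZFS-on-Linux parameter paths (it just prints a note if the module isn't loaded):

```shell
# Read back the current ARC tunables from the zfs module, if present.
show_arc_tunables() {
    for p in zfs_arc_min zfs_arc_max; do
        f=/sys/module/zfs/parameters/$p
        if [ -r "$f" ]; then
            echo "$p = $(cat "$f")"
        else
            echo "$p = (zfs module not loaded)"
        fi
    done
}
show_arc_tunables
```

Note a reported value of 0 means "use the built-in default" rather than a literal zero-byte cap.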

              5. Tridac

                Re: the caching services zfs requires

                ZFS will use all the memory it can by default, but there are several tunables to limit that to a defined amount. Not a problem really...

                1. Graham Perrin

                  Re: the caching services zfs requires

                  Not exactly all that it can, but it's a fair comment.

                  IIRC defaults on FreeBSD differ slightly from defaults on Linux.

            2. Anonymous Coward
              Anonymous Coward

              Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

              As I understand it the Adaptive Replacement Cache (ARC) algorithm and its sidekick L2ARC are OpenZFS' super power. It's at the heart of its performance, it's where a lot of the last fifteen years of development and improvement have taken place, and it's highly user-tunable to adapt to specific requirements (low RAM usage, or speed, or response time etc.). There's no desire from OpenZFS developers to hand that over to an underlying generic OS cache as they feel that would be a retrograde step.

              1. Graham Perrin

                Persistent removable L2ARC

                L2ARC is a minor miracle.

        2. kmoore134

          Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

          That is factually untrue. SCALE and CORE both pretty much have the same memory requirements. Linux isn't fundamentally heavier in that regard. It's all in what you run. If you are doing ZFS + SMB only, sure, you can get away with less RAM. If you start piling on other things, of course you will use more memory.

      2. chris street

        Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

        You can easily buy RAM for the N40L secondhand though, and it's not something that is going to be unreliable if you get it from the larger server parts companies. I was paying about £8 a stick for 8GB ones with ECC that will go in a Microserver; certainly worth the upgrade. They are a PITA to get the motherboard out of though, for sure.

        1. Tridac

          Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

          Have an N54L here for archive backup. Removed the blanking plate for the CD-ROM, bit of metal bashing. Fitted a CD-ROM-sized Supermicro 4 x 2.5" SAS disk enclosure and a PCIe SAS controller, and have 3 x 3.84TB SSDs in a ZFS pool + 1 x backup spare. The original 3.5" drive bays have adapter plates for the 2.5" FreeBSD system disks, a mirrored pair. Run an NFS server from that and typically use rsync scripts to back up the main lab server and other machines. Has 2 x 8GB sticks and did tune the ZFS memory usage parameters...

      3. Zola

        Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

        I recently upgraded an N54L to 16GB (2x 8GB) ECC RAM - just buy the known/recommended ECC RAM sticks (mine is Kingston, that I picked up for £40 the pair).

        I mostly use it for storing movies and documents/git repos in a home environment (SMB & NFS).

        It all runs great with TrueNAS Core and about 40TB of storage across 3x pools (2x RAIDZ-1, 1x mirrored & encrypted).

        I have the 4x internal HDD plus 4x SSD drives (in a 5.25" cage in the CD slot) all hanging off an LSI-9211-8i, and a 2x drive JBOD connected to the external SATA. Boots off an SSD connected to the optical motherboard SATA (and I still have the 4x SFF-8087 motherboard SATA ports available!)

        To be honest, changing the RAM was no big deal - it takes about 5-10 minutes to get the motherboard out and back (closer to 5 once you've done it a few times) and it's not really something you need to do often.

        I ran FreeNAS 9 on this for years, and made several customisations (MySQL jails etc.) which bit me in the arse as it massively complicates OS upgrades, so now I keep it simple and stick to the plain TrueNAS system and upgrades are a breeze. The MySQL etc. runs on an RPi4 instead (and is actually better for it).

        I'll probably stick with TrueNAS Core 13.x from here on out, but maybe keep an eye on any fork.

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

          > just buy the known/recommended ECC RAM sticks

          I did. It didn't work.

          That's after half an hour of disassembly and reassembly.

          > (mine is Kingston, that I picked up for £40 the pair).

          Now there is the thing. For a decade-old server that cost me £90, I consider that 50% of the computer's price for more memory is excessive. Your mileage clearly varies substantially.

          For my herd of ageing Thinkpads, I generally pay about £5 for an 8GB module.

          1. Zola

            Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

            > I did. It didn't work.

            Sorry to hear that, but no idea. I did, and it worked. Ran Memtest86+ for a couple of days, all tests passed and has been working flawlessly ever since.

            I bought the Kingston KVR1333D3E9S/8G listed on this page (which I'm sure you know about). There's probably cheaper options, I just couldn't be arsed hunting them down.

            > Now there is the thing. For a decade-old server that cost me £90, I consider that 50% of the computer's price for more memory is excessive. Your mileage clearly varies substantially.

            Mine cost me £200, then I claimed the £100 cashback. That was about 13 years ago. Spending £40 on something that has served me so well in 13 years and still had more life left in it didn't seem like a bad deal.

            I also considered the alternatives, which would have meant a complete replacement (none of which would match the form factor), and there either were none or they were prohibitively expensive; so given all of that, £40 was an absolute bargain and a no-brainer.

            1. hoola Silver badge

              Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

              Even if you look at a Microserver Gen8 the costs for base models are crazy when you consider how old they are.

              That is because the Gen 8 is such a great piece of kit. I still have 2 running stuff quite happily, both with 1265L-v2 CPUs and 16GB.

              People are paying £300 to >£400 for one of this spec, and even the GT1610 with 4GB is well over £100 on Flea Bay.

              Now compare that with any of the Gen10 models. The AMD ones are challenging because the CPU is soldered. The Gen10 Plus is great if you can get one, but expensive, again because of what you can do with it.

              I bought one for £450 with a 4-core Xeon and 32GB RAM. Now add an 8-core/16-thread Xeon, swap the RAM to 64GB (it works) and a 208 Smart Array, and you have a serious piece of hardware. It is very difficult to find the first-generation Gen10 Plus anywhere; when they do appear, they go really fast.

              There simply isn't anything comparable out there to these Microservers.

              1. Anonymous Coward
                Anonymous Coward

                Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

                > Now add an 8 core/16 thread Xeon ...

                Oh. Do the Gen10+ ones have a socketed cpu as well?

                I'd been ignoring them because I thought they were soldered to the board like the Gen 7's.

                That being said, how many drives can go in the things?

                My impression from the ServeTheHome review when they first came out was there's now a limit of 4 rather than 5 (gen 8) or 6 (gen 7).

                1. Justin Clift

                  Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

                  Yep, they do indeed have a socketed CPU:

                  Wish I hadn't missed that before now, as I've seen a few of these models over the years. ;)

                  1. hoola Silver badge

                    Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

                    The Gen8 Microserver was always an oddity in terms of features and the unofficial upgrades.

                    The standard Gen10 in the same footprint was just a small, very quiet box.

                    The Gen10 Plus is a different beast entirely and is much more aligned with the Gen8. Both have socketed CPUs, although there are only 2 DIMM slots, so for the Gen8 16GB is the hard limit; for the Gen10 Plus, 32GB is the official limit but 64GB is fine. Just get decent DIMMs.

                    I used that site as a reference to sort out the CPU, comparing the power consumption with the Gen8, and opted for a cheaper (probably dodgy source) 8-core/16-thread CPU that is an Intel pre-release (also from Flea Bay). At £140 it made the entire thing more viable.

      4. Justin Clift

        Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

        > ... only the Gen 8 can be affordably upgraded, and even then it would be neither cheap nor easy.

        That's kind of funny, as just yesterday I've ordered the upgrades for one of my Gen 8's:

        * ECC ram sticks - (2 of these)

        * CPU upgrade to E3-1265L V2 -

        Already upgraded one of my microservers (I have a bunch), and I'll probably do the rest in a few weeks too.

        The PCIe x8 slot (only gen 2 though) can even take a 10GbE card, or even dual port 2.5GbE ones if that's more your thing. :)

        1. hoola Silver badge

          Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

          Or put in a P410/420 raid card then you get proper RAID support across any OS at the hardware level.

          You can also use SAS disks then as well.

          If you do this you need a converter tray; there is a neat metal one that will do SAS to SATA (it is only the middle of the connector that is different). The backplane actually supports 3.5" SAS drives directly if you have a controller.

        2. Justin Clift

          Re: Doesn't TrueNAS Scale work just as well on the HP Microservers?

          As a data point, the cpu and ram upgrades arrived yesterday. They're all working well.

          The RAM is identifying itself as authentic HPE "Smart Memory" (whatever that means), it's passed an extended run in Memtest86+, and is now running real workloads without issue. :)

  2. Paul Crawford Silver badge

    Bad news indeed

    This is a bad move as far as I can see, for two reasons. The first is the obvious one: being able to boot the lot from a USB stick in a small server or adapted desktop is great for home and small office use. Furthermore, that is likely to get small businesses looking at paid-for support once they are using it and realise it has become business-critical.

    The second reason is that I would rather have a different OS for my backup machine, in case my main machines (almost all Linux) find themselves with their pants down one day due to a zero-day bug. One would hope that FreeBSD would not (more so given its generally slow/negligible adoption of useless features that might compromise security).

    In my previous employment we had a ZFS-based NAS using Solaris, but that turned out to be pretty awful, due to Sun rushing it out and Oracle being rubbish at fixing any of it once they took over. Perhaps because the key engineers had all left! So while I think ZFS is fantastic for this sort of job, due to built-in integrity checks and low-overhead snapshots against data corruption by ransomware, it seems hard to find a company who can wrap it into a good product that is not prone to bloat and usurious licenses.

    1. katrinab Silver badge
      Thumb Up

      Re: Bad news indeed

      Agreed, especially when, on at least my HP Microserver, there is a USB port inside the case on the motherboard, which you can use as the boot device. Probably HP put it there with exactly that use-case in mind.

    2. kmoore134

      Re: Bad news indeed

      We've told users to avoid USB sticks for boot media for YEARS now, on CORE as well. The irony is that on SCALE USB is far better supported and you will likely have better results, even if we still don't recommend it :)

  3. Peter Gathercole Silver badge

    Some backtracking here, maybe

    Liam, you've previously proclaimed UNIX dead, and reiterated this belief as recently as last week.

    I think you would be a little two-faced if you didn't include FreeBSD in the group of systems that are under the UNIX banner.

    Suck it up. According to you Unix is dead and Linux is the new UNIX!

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Some backtracking here, maybe

      [Author here]

      > if you don't include FreeBSD in the group of systems that are under the UNIX banner.

      Nobody has paid for FreeBSD to be put through Open Group UNIX™ testing, therefore, FreeBSD is not a UNIX™ and never has been.

      At least 2 companies paid for this for Linux: Huawei and Inspur. There may be others: I do not have access to historical records.

      Legally speaking, BSD is not UNIX, as the FreeBSD project carefully spells out:

      Note that the BSDI company mentioned in that page is now known as iXsystems. So this is the original company that productised and sold BSD/OS now stepping away from BSD. I think that's very sad.

      1. Peter Gathercole Silver badge

        Re: Some backtracking here, maybe

        If you look back at my posts, the fact that BSD never took the UNIX branding is a point that I make frequently, but when I say that, I often get shouted down!

        I have often taken exception to your assertion that Huawei and Inspur putting their flavours of Linux through UNIX™ branding means that Linux in general has UNIX™ branding. It doesn't, because you don't know how much work Huawei and Inspur did to their versions to achieve acceptance, nor do you know how much of that was accepted back into mainstream GNU/Linux. This is actually the other side of the double-edged sword that is Free Software: people can change it to suit their needs.

        In my view Linux is not UNIX, and as I have said, is moving away from UNIX at an ever increasing rate, although in those two cases I accept that I may have to make a minor, and time-limited exception.

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: Some backtracking here, maybe

          Yes, I know and I do take your point.

          By and large, in my experience so far, the vendors of CentOS clones do not put a huge amount of R&D into their efforts. The thing that made the RHELatives desirable was their compatibility with RHEL. If a vendor were to make profound changes in order to pass a test, it wouldn't be CentOS compatible any more, and thus, they might as well have chosen something else, such as Debian.

          So, the real answer here is I don't know, and I am not going to go to great effort to find out, as both are historical now. The companies aren't going to tell me, the Open Group probably doesn't know, and it doesn't matter any more.

          The point is not "Linux is right now at this moment a Unix because it carries the Unix branding". It cannot be: it does not any more.

          The point is: "Linux passed in the past. It has done it before, and therefore, it is a certainty that given some effort it could do it again."

          I am sure any of the BSDs could as well, if anyone wanted to spend the time and effort... but nobody does.

          Xinuos might have the best case to do so, but I don't think it has enough money. I think its niche is small and not very lucrative.

          But it did happen, and a lot of people in this sector of this industry seem to be blind to this fact, which is why I keep repeating it.

          From the many fora and lists I inhabit, my strong impression is that most people still believe that "Unix" still means "based on AT&T code" and that nothing _not_ based on AT&T code can be a Unix. That has not been the case since very soon after the beginning of the public Linux project, and I think it's important to acknowledge this.

          Current OSes that bear the brand are macOS, AIX, UnixWare, SCO OpenServer, HP-UX, and gods help us, z/OS.

          (Note: not Solaris.)

          But _OSes still in active development_ that bear or have borne the brand are macOS and Huawei EulerOS/openEuler.

          AIX and HP-UX are in maintenance, as are the Xinuos products.

          Even their more current offering, OpenServer 10, is FreeBSD-based:

          And it's no longer mentioned on the homepage, or the products list. I think it's dead.

          For my money, there is a Venn diagram here, and the interesting part is the intersection between "previously passed the testing" and "is in continued active development".

          That ellipse holds 2 editions of MacOS and 1 Linux distro.

        2. Tridac

          Re: Some backtracking here, maybe

          Agreed. As Linux gets ever more bloated and complex, with too many interdependencies (systemd, anyone?), it's about as far away from the original Unix idea as it's possible to be. Give me lean, lightweight and transparent, any day of the week...

      2. Anonymous Coward
        Anonymous Coward

        Re: Some backtracking here, maybe

        The BSD/OS software assets and BSDi name were sold to Wind River in and around 2000. iXsystems was effectively the hardware division that was spun out on its own.

  4. l8gravely

    And what about the Clustered version?

    If I had known this was coming, I might not have spec'd out a TrueNAS M40 system for a client, which is an Active/Passive cluster pair from IxSystems. Sigh...

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: And what about the Clustered version?

      [Author here]

      > an Active/Passive cluster pair

      AFAICS, if you are clustering them, then you are therefore using TrueNAS SCALE, according to the company's own page:

      Scale is not based on FreeBSD. It is based on Debian. It is unaffected by this. You are running the product that has _replaced_ the product that this article is about.

      1. DougMac

        Re: And what about the Clustered version?

        iXSystems had failover clusters of Enterprise TrueNAS years before TrueNAS Scale was even thought of.

        1. Nate Amsden

          Re: And what about the Clustered version?

          For third-party TrueNAS HA there is also RSF-1, which has built-in TrueNAS support. I was going to use this last year, then looked into TrueNAS more and realised I could not use it, as it did not appear to support any external Fibre Channel storage with MPIO etc. (as far as I could tell), and their newer hyperconverged stuff was even less likely to work. So instead I built a pair of Linux systems on refurb DL360 Gen9s with Ubuntu 20, with RSF-1 from that company. Support was good and quite cost-effective. I think this is the same software that Nexenta used to use (maybe still does, not sure) back when I tried to use them for HA about 12 years ago, though at the time it was "built into" the product. I want to say I saw references to RSF-1, but maybe that's bad memory; apparently the tech is ~20+ years old.

          One caveat with RSF-1 is that you can't use NFS locking if you are exporting via NFS (so you have to mount with -o nolock, else you risk not being able to fail over gracefully). Learned that the hard way. My use case is purely NFS exports for low-usage stuff, but I wanted HA. Recently I wrote some custom ZFS snapshot replication scripts for this NAS cluster to store backups from some other important systems for various periods of time, by sending the snapshots to it; the ZFS data for that is stored on a dedicated/isolated storage array.
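          For reference, the client-side mount and the snapshot replication can be sketched roughly like this (hostnames, pool names and dates are made up; the real scripts are custom):

```shell
# Client side: mount the HA-exported share without NFS locking,
# so a failover of the server VIP can complete gracefully
mount -o nolock,vers=3 nas-vip:/tank/exports/data /mnt/data

# Replication side: take a snapshot and send it incrementally,
# relative to the previous snapshot already on the receiver
zfs snapshot tank/backups@2024-06-01
zfs send -i tank/backups@2024-05-31 tank/backups@2024-06-01 | \
    ssh nas-vip zfs receive -F tank/backups
```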

          For home / "home lab" (I don't use that term for my own stuff; I have about 33 different systems between my home and my colo that I run), I do everything purely in Linux (and ESXi, though no vCenter etc.), no FreeNAS/TrueNAS/appliances. My home "file server" is a 4U rackmount with Devuan Linux, and at my colo my "file server" is a Terramaster 4-bay NAS with Devuan on a USB-connected external SSD.

  5. Jamie Jones Silver badge
    Thumb Up

    Good reporting

    Cheers Liam, an in-depth, and well balanced article.

  6. EvaQ

    I know Linux is the devil to *BSD users, but apart from that: what is bad about the move to Linux? Will people miss features?

    1. Liam Proven (Written by Reg staff) Silver badge

      [Author here]

      Features, probably not, no.

      But I think there are costs.

      • As I mentioned in the article, as well as in the comments above, significantly poorer memory efficiency, which renders the Linux-based product less suitable for small/low-end deployments and constrained hardware.

      • FreeBSD has a reputation for better stability than Linux, and fewer, less frequent, updates.

      • The FreeBSD hypervisor is small, efficient, clean and modern:

      It can also cope with handy features like memory overcommit -- you can assign more RAM to VMs than is available at that time -- and lazy commit, so they don't try to initialise all of it when they start.
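      In practice that looks something like this (a sketch only; device paths and the VM name are hypothetical). Without bhyve's -S flag, which would wire all guest memory up front, pages are only allocated as the guest actually touches them, so the 16G figure can exceed the RAM free at launch:

```shell
# Start a bhyve guest with 16G assigned, relying on lazy allocation
# (omit -S so guest memory is not wired at startup)
bhyve -c 2 -m 16G \
    -s 0,hostbridge -s 31,lpc \
    -s 2,virtio-blk,/vm/guest.img \
    -l com1,stdio \
    guestvm
```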

      1. Anonymous Coward
        Anonymous Coward

        To which I will add

        the non-trivial learning curve for users who became familiar with the "under the hood" BSD operating system. "Hey, just learn a totally different OS and tool chains and backup regime and networking stack and init and... and... and..."

        The company is shoving its user base onto its new code base without concern for THEM, just itself. This is a huge red flag whether the company realises it or not. There may be an innocent-ish explanation from the company's side, most likely the drag and expense of maintaining feature parity and compatibility between code bases built for two incompatible operating systems. That's a lot of dev and QC time, and if there are large paying customers looking for KVM support, it may look like a clear-eyed decision for the company.

        The problem is that their existing user base isn't the one benefiting there, it's the developers and their shareholders/owners. Some of their user-base is probably indifferent and already knows both platforms.

        The ones who are left will howl eventually. "But it won't run on my outdated hardware" isn't ever going to carry weight in these decisions, though.

        "It will break large production systems that require deeper integration than just the file system" might catch an ear. "We spend enough on storage to pick up the fork and spin it out as a more agile competitor to you in five years" would give them pause, if it was credible. Wailing about old hardware support, while not wrong, isn't going to move the needle. In the end their silence may be a mercy :-)

        They could offer you 1000 Dogecoin and tell you to hit eBay for some newer obsolete hardware to save it from the landfill.

        (Seriously though, not judging, still got a 25 drive HP Storageworks array running 25 feet from my desk.)

    2. Dante Colò

      I think the question is not whether it is bad or good, but whether there is a real reason to move to Linux, or just a personal preference of someone at iXsystems.

      1. Graham Perrin


        It's not one person's preference. Commentary in Reddit is worth reading; Liam has provided highlights.

      2. Anonymous Coward
        Anonymous Coward

        Broader hardware / driver support is a major reason

      3. Anonymous Coward
        Anonymous Coward

        Good explanation and discussion from head of TrueNAS project on the forums

  7. Dante Colò

    I guess this could be a decision by someone influential at iXsystems who simply wants Linux instead. What's the real reason for that switch?

    1. DougMac

      My personal opinions on the change to Linux are:

      a) the devs/users really like running side applications on their NAS boxes. The older jails/bhyve setups cost them considerable development time to maintain the side applications. Replacing that with Linux containers makes their development/maintenance time much lower, especially with some open-source apps only being released as containerised apps.

      b) Now that OpenZFS exists, and is plugged easily into many Linuxes, they can put more development into OpenZFS, rather than trying to deal with the various different import dates of ZFS code into FreeBSD releases, and redoing work that FreeBSD "reverted".

    2. Liam Proven (Written by Reg staff) Silver badge

      [Author here]

      > What's the real reason for that switch?

      I can answer that in three characters.

      The letter "K", the digit "8", and the letter "s".

      1. Anonymous Coward
        Anonymous Coward

        "sk8" ?

        Storage on ice, or maybe blades? ;-)

        I'll just get my coat....

  8. may_i

    Troubling developments

    I've been running TrueNAS for a long time on the various hardware that has hosted my NAS over the years. The NAS still has 'freenas' as its host name, and it has been rock solid through the years.

    I only run two jails on it - one to share out my media library through Jellyfin and one which runs URBackup to keep my work and personal machines backed up.

    I see zero benefit in running my NAS on Linux and having to transition a stable configuration to something which provides zero benefit for me compared to what I'm running now. All the experimental virtual machines, containers and one-off experiments run on my Proxmox cluster where they can't damage anything and benefit from the flexibility needed for that use case.

    It's sad to see iX leaving their FreeBSD product to wither and die and while the new SCALE version may gain them some buzz and hype for the next few years, I can't help but think they are doing themselves no favours by abandoning the stability and reliability that they have in the FreeBSD code base. The general sentiment on their forums, from the users who have been with them since the freenas days, is far from positive.

    1. Anonymous Coward
      Anonymous Coward

      Re: Troubling developments

      Yeah, they should have read the room on that front and done a better job managing the transition, rather than telling a bunch of storage people to embrace the suck and dive into an unfamiliar, disruptive and unwelcome change.

      Storage people have 60% angry badger DNA and don't respond well to having their fur rubbed the wrong way.

    2. Woodnag

      Re: Troubling developments

      My FreeNAS (never upgraded to TrueNAS) runs a treat, but the two mirrored USBs with FreeNAS keep crapping out as USB sticks get unreliable with MLC. Annoying to repeat a scrub and get more corrected errors. So I see why going to SSD was chosen.

  9. im.thatoneguy

    iXsystems announced that the RAM limitations on Linux for ZFS ARC will be going away in the next release. Other than that, I think if some of the last major BSD developers are choosing to move away, then they are probably doing it confidently. It's not like there aren't Linux systems running on IoT microcontrollers, so you can't say that Linux can't scale down. In fact most small NAS systems on the market run Linux; I don't know of a single commercial 2-4GB RAM NAS running BSD except for the iX Mini.

    Worrying about the pace of kernel updates on Linux seems to be somewhat moot as well since iXSystems controls their distro so they can pick and choose the pace that Scale adopts updates.

    1. kmoore134

      We follow the latest LTS kernel releases for SCALE.

  10. ldo

    Limited Exposure To BSD ...

    I have a client making heavy use of pfSense, which is built on FreeBSD. This is developed by Netgate, who I believe are the primary sponsors of FreeBSD development. Compared to Linux, I find some odd quirks. Like the fact that the “route” command cannot actually list your routing table, like on Linux: you have to use “netstat -r” for that.

    Also their use of PHP for the GUI limits some of the things it can do. But that’s another story ...

    1. FIA Silver badge

      Re: Limited Exposure To BSD ...

      OPNSense might be worth a look. It's a (distant) fork of pfSense.

      1. ldo

        Re: OPNSense might be worth a look

        Had a look at their docs. My ears pricked up at the mention of Python, but that’s used only for some backend service, they are still sticking with PHP for the frontend. I find that oddly limiting.

        1. Anonymous Coward
          Anonymous Coward

          Re: OPNSense might be worth a look

          Why? I don't personally use PHP, but what is it that python code can do which php can't?

          (By the way, you still mean backend. PHP and Python would both be driving the back end of the web servers.)

          Your frontend would be HTML, CSS, JS etc.


          1. ldo

            Re: what is it that python code can do which php can't?

            In a word: WebSockets.

            Here’s an example I came across in the pfSense UI: there is a “ping” function under the diagnostics menu. You type in the address you want to ping, hit the “Start” button, and ... nothing happens. You are supposed to wait a few seconds for sufficient pings to go out, then click “Stop” and it will display the results.

            I find this clunky. A more natural UI would give a running progress display, outputting the response from each ping as soon as it comes back—just like doing a ping from the command line works, in other words.

            But to do this requires an ongoing connection back to the browser, to return those real-time results as a message stream. The only good way I know of to do this is via WebSockets. Python can handle this via its “ASGI” web-server architecture, supported by the major Python frameworks. This deals with both regular HTTP connections and WebSockets in a common foundation, based on async/await constructs and the asyncio event-handling framework.

            PHP has no equivalent.
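            The streaming shape I mean can be sketched in a few lines of Python (a simplified stand-in, not pfSense code; the async generator fakes the ping replies, and the consumer loop stands in for a WebSocket send()):

```python
import asyncio

# Each result is pushed to the consumer as soon as it arrives,
# instead of being collected and returned in one batch at the end.
async def ping_stream(host, count=3):
    for seq in range(count):
        await asyncio.sleep(0.01)          # stand-in for waiting on a reply
        yield f"reply from {host}: seq={seq}"

async def main():
    lines = []
    async for line in ping_stream("192.0.2.1"):
        lines.append(line)                 # a real UI would websocket.send() here
    return lines

results = asyncio.run(main())
print(results)
```

            With ASGI, the same async/await machinery serves both the HTTP pages and the WebSocket carrying this stream.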

            1. Anonymous Coward
              Anonymous Coward

              Re: what is it that python code can do which php can't?

              Ahhh, OK, fair point. As I said, I don't use PHP; if that's the case, it would indeed be an issue.

    2. Anonymous Coward
      Anonymous Coward

      Re: Limited Exposure To BSD ...

      No, it's standard to have different commands to set something and to show something.

      netstat -r to list the routing tables has been the norm since the very first IP on Unix systems.
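      Concretely (a quick reference, not an exhaustive comparison):

```shell
# The traditional Unix/BSD way to list the routing table
netstat -rn        # works on FreeBSD, and historically on Linux too

# The current Linux-native equivalent (iproute2)
ip route show

# On FreeBSD, route(8) modifies or queries single routes
# (e.g. "route get 8.8.8.8") rather than listing the whole table
```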

      Therefore, the odd quirk is the don't-care-about-Unix Linux system.

      Tell me, what's the ADHD Linux users' flavour of the month these days? Is it ifconfig, or ip today? devfs or udev? oss or alsa?

      Don't get me wrong. Sticking to something just because "it was always that way" is bad.

      Changing things for no reason, other than it's shiny-shiny cool is the "quirk" in the system.

      1. Teal Bee

        Re: Limited Exposure To BSD ...

        >Is it ifconfig, or ip today? devfs or udev? oss or alsa?

        ip, udev, and alsa have been around for 24, 20, and 25 years respectively.

        If anything, Linux changes at a glacial pace.

        1. Anonymous Coward
          Anonymous Coward

          Re: Limited Exposure To BSD ...

          You're being pedantic, and are missing the point of his/her post.

          1. ldo

            Re: and are missing the point of his/her post.

            Given that the post in question started out with a pretty lame defence against the point I was making, then tried to shift to an equally lame and irrelevant counterattack against Linux, I would say it wasn’t “missing the point”, it was “trying to be polite”.

  11. IvyKing

    Reminds of an old Slashdot Troll

    BSD is dying; Netcraft confirms it. OTOH, macOS has a primarily BSD userland, which certainly helps MacPorts and Homebrew maintain a large selection of open source software for the Mac.

    I also feel a bit of nostalgia about one of the original distributors of BSD phasing out BSD. One selling point for BSD was pointing out that the Walnut Creek server could keep a 100Mbps link saturated while running on a 386. Another reason for feeling nostalgic is that I probably passed most of the original BSD developers at one time or another in my years at Cal. To top it off, I just read that the last of the original BART cars will be making their final run next month.

  12. thondwe

    BSD death by a thousand linux migrations?

    Every time a BSD-based product loses to Linux, it becomes a little more unappealing - support for new hardware is likely to be an ongoing issue. E.g. graphics card drivers for Plex/Jellyfin encoding, I'd guess, might be something that disappears as a result of TrueNAS moving to Linux? pfSense presumably focuses on Netgate tin; likewise Juniper and Apple (M* chips now).

    I'm sort of expecting OPNsense to be looking seriously at a Linux switch as a final step in its separation from pfSense?

    1. ldo

      Re: OpnSense to be looking seriously at a Linux switch

      I have heard it said that the pfSense developers went with FreeBSD originally because it had the superior network stack to Linux at the time.

      Nowadays it is Linux that has the more advanced network stack, and even the developers admit that. So yes, there will likely be a switch to Linux at some point, as painful as it will be. Or more likely a fork, with the usual (small) cohort of diehards stubbornly sticking to the old way of doing things.

    2. Anonymous Coward
      Anonymous Coward

      Re: BSD death by a thousand linux migrations?

      In your comment, replace BSD with Linux, and Linux with Windows, and then realise what a foolish and defeatist statement that is.

  13. Mockup1974 Bronze badge

    FreeBSD is dying...

    The situation for other Unices like Illumos, NetBSD and OpenBSD is even bleaker.

    1. BinkyTheMagicPaperclip Silver badge

      I'm not sure how you reach that conclusion unless your only criterion is 'it has to be usable as a desktop'.

      FreeBSD, I could agree, needs more resources. I'm currently attempting to use FreeBSD to switch away from Windows, and boy is there a *gulf* between FreeBSD and either Windows or Linux. Not to mention Firefox breaking in latest ports (and quarterly, if I remember), VSCode breaking, and WINE breaking the 32-bit port on 64-bit platforms (fixed in December). Then, once you run WINE, the API coverage is substantially below that under Linux - which shouldn't be a surprise really, but it's disappointing.

      On the bright side, ZFS is still great, and bhyve is 'good enough', if still rather bleeding edge and not really average-end-user turnkey. There's enough functionality and compatibility to achieve most aims, and I expect to see it improving in the future.

      Illumos - has it ever really hit popularity? I'll give you that one.

      NetBSD is the same it has been for a long time. A very slow increase in base functionality, extremely varied per platform support, some gems among a lot of cruft. It's an OS for research, tinkering, and embedded systems, not a general purpose OS unless you enjoy a great deal of pain and are prepared to hack a lot.

      OpenBSD is the same, it maintains its focus on security and a userland that is readily accessible from the command line. Provided you fit its use case, it's a pleasure most of the time.

    2. ldo

      BSDs Versus Linux

      There are maybe half a dozen BSD variants currently alive, versus maybe 50× that number of Linux distros. Yet it is easier to switch between Linux distros than it is to switch between BSD variants.

      Those Linux distros can be more different from each other than the BSD variants. So Linux manages to offer a great deal of variety with little fragmentation, while the BSDs offer more fragmentation and less variety.

      1. Anonymous Coward
        Anonymous Coward

        Re: BSDs Versus Linux

        Linux fanboi doesn't know what he's talking about. Linux userland is horribly fragmented.

        You are one of the "shiny shiny" people who are the cause of all the software and OS woes these days.

        Of course, you bright young things know everything, and all the old hippies who have been there, done that, are living in the past, right?

        Those who ignore history are doomed..... etc.

        1. ldo

          Re: Linux userland is horribly fragmented.

          I can run multiple Linux userlands under the same kernel, using containers. On my Debian system, in the /usr/share/lxc/templates/ directory, I see over 2 dozen entries, for things like Alpine, Devuan (!), Fedora, Kali, OpenSUSE, Slackware, Ubuntu and loads of others. Even, shall we say, “fringe” Linuces like Android and ChromeOS still run the same kernel.
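          For example (container name, distro and release are illustrative; the download template fetches a prebuilt root filesystem):

```shell
# Create and start an Alpine userland container on a Debian host,
# sharing the host's kernel
lxc-create -n alpine-test -t download -- -d alpine -r 3.19 -a amd64
lxc-start -n alpine-test

# Run a command inside the foreign userland
lxc-attach -n alpine-test -- cat /etc/os-release
```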

          Try mixing your BSDs in the same way, and let us know how far you get.

          1. Anonymous Coward
            Anonymous Coward

            Re: Linux userland is horribly fragmented.

            It's the same OS.

            Your comment is as irrelevant as saying I could run ghostbsd, midnightbsd, pfsense, opnsense, bsdrp, starbsd etc. the same way.

            Oh, and you couldn't run ChromeOS or Android like that - the Kernels are too different.

            Incidentally, FreeBSD runs Linux binaries. I could do the same thing you describe, but I don't need to - I use a sane OS with no stupid and illogical fragmentation.

            As an aside, if you try compiling a *different* BSD's source code on another BSD, you'll have far more success than attempting to do it under the ever-changing Linux KPI/API.

            And I bet when MS was more dominant, you were one of those people who whined about MS not following standards, whilst keeping strangely quiet when Linux (mainly userland) does the same.

            1. ldo

              Re: It's the same OS.

              You’re agreeing with my point.

            2. ldo

              Re: It's the same OS.

              And yes, you can run Android in a container—Waydroid does exactly that.

  14. osxtra

    Much Ado

    To me, this is old news. Went from Core to Scale on a home-built server, first installed a few years ago. Used ECC so ZFS would be "happy". Went with what the board would hold (128G) in case I wanted to run other apps (Plex server, etc.).

    Both Core and Scale are pretty well locked down. You generally can't add things; you don't even get screen (but honestly, tmux is fine too, just not what I was used to).

    The upgrade from Core to Scale was painless.

    Not sure I'd want my main backup system booting from a stick. It's on a small NVMe (cloned just in case for disaster recovery) with a handful of fairly large spinning rust drives holding the data. The most "perishable" data gets backed up offsite again. Movies etc. I can always rip again from the original media. Hopefully. ;)

    Put in some used Solarflare 10G fiber cards for quick(er) LAN connectivity to the various other boxes.

    So far as BSD vs Debian, bits is bits; don't really have an opinion on the underlying OS, so long as it works, and won't bite me in the license.

    1. ldo

      Re: Not sure I'd want my main backup system booting from a stick

      Why not? It’s only the OS, so if it fails, you just set up a new stick with a new copy of the OS. All your important data is on the internal storage.

  15. Tridac

    Never seen the need to use FreeNAS or any other such dedicated system like that. Takes about 20-30 minutes to install FreeBSD and do the usual tweaks, set up NTP etc., then run NFS and Samba for the server functions. Updates are trivial and the system is simple to maintain otherwise. So, why would I need FreeNAS etc.?

  16. Anonymous Coward
    Anonymous Coward

    Out of interest, what would happen if you replaced the 13.1 kernel and runtime shared libraries with 14?

  17. pjlaskey

    Bad News Indeed

    I am a long-term FreeNAS/TrueNAS user. Last year I made the switch to SCALE and began a six-month hellish ordeal that eventually led me to wipe my entire NAS and start from scratch with CORE. The SCALE learning curve was steep, but I was all-in on adopting the new technology. I should say that I have been a Linux user since Slackware in the 90s, so I thought SCALE would feel like home. What I found was roadblock after roadblock.

    I quickly achieved a certain level of proficiency with Docker and did a deep dive into installing apps and setting up ingress/egress. I found that the main source of mature apps is TrueCharts. While I liked the number of choices, I found the Charts to be fussy and highly unstable. Some apps received regular breaking changes; at one point, 75% of my apps were affected by breaking changes at the same time. Add to this, the support staff is user-antagonistic at best. All that said, I was able to get most things running that I needed.

    Then SCALE switched from Docker to K8s and, at six months in, I didn't have the time or interest in learning a new container system. Finally, I discovered that SCALE was deprecating and removing other functionality that I had come to rely on. First was the ability to mount NTFS filesystems, which I know is a bad idea, but I was using it to import files from USB storage as a read-only source. The second was the removal of the OpenVPN service. I rely on this daily to access my network from afar. I know I could stand up a VM or maybe a K8s container to host the VPN, but that has its own networking challenges and seems less efficient. Also, I had already invested six months into this and had to draw the line.

    I was re-attracted to the maturity and stability of CORE so I decided to do a once-in-a-decade NAS wipe and start from scratch. Since reverting to CORE, life has been smooth sailing. I'd like to say that I will never look back, but I might not have a choice...
