
The SFC can kiss my taint...
2-clause BSD all the way, baby!
Canonical is expanding Ubuntu's support for ZFS, an advanced file system originally developed by Sun Microsystems. Ubuntu's support is based on the ZFS on Linux project, which itself is based on code ported from OpenSolaris, Sun's open-source operating system. It is licensed under Sun's Common Development and Distribution …
Why does software need a license? If you're publishing the source code on the internet then you've already lost control of how it's used.
All licenses do is make money for lawyers.
(And provide yet another category of pointless things for geeks to argue about)
That was kind of the point of the BSD license - to write really good software so that it's used everywhere - for the good of everyone.
If the internet had been born in a GPL world, it would never have taken off, as no-one would have been able to advance their products without fragmenting the code or giving up the ability to earn a living.
Tom.
The FreeBSD project is constantly releasing software under the BSD license, while Apple is a notorious consumer of BSD-licensed software. Please verify how well the two are able to earn a living and come back to tell us.
GPL is nothing but a BSD with fangs intended to bite greedy corporations/developers. To put it in simple terms, GPL is like BSD with one restriction: you are not allowed to attach any more restrictions. Just give it away the same way you received it. Even Microsoft has come to understand this and has no problem with it.
As for fragmenting code, it is preferable to being locked into bad code.
Internet was born in a BSD world but GPL fueled its tremendous growth.
> Apple is a notorious consumer of BSD licensed software.
And thank zot they are; gives the non-expert access to a Posix userland. And ports.
Of course, it is outrageous what they charge for the OS and the free upgrades! I can't count what I've spent in the years since, keeping my 2011 unit up-to-date.
Meanwhile, the BSD folks keep working away garnering support and contributions based on the quality and utility of the results, not enforced by licence terms.
GPL is great for stand-alone applications, but its tainting issues (hence the LGPL), make it a difficult choice for tightly integrated systems like kernels and OSs. I'm sure when the Hurd arrives, it'll all be fine.
Re: Apple and FreeBSD
"Please verify how well the two are able to earn a living and come back to tell us."
From my perspective, both seem to be alive and kicking.
I've been using ZFS on FreeBSD for several years now. My 2 workstations are both "entirely ZFS" on bootup, and the server is UFS for the OS, ZFS for the data. I did that for a few reasons, making recovery a little easier being one of them.
ZFS tweakage isn't that hard. My server has 4GB of RAM on it. Works just fine with ZFS; you just tweak some of the memory footprint variables a bit to compensate. Slight reduction in file system performance, but reliability, replication, compression etc. are all working fine. But yeah, an un-tweaked ZFS file system will consume 8GB of RAM if it gets a chance.
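For anyone curious, the main memory knob on FreeBSD is the maximum ARC size, set as a loader tunable. A minimal sketch, with the 2GB value purely illustrative:

    # /boot/loader.conf - cap the ZFS ARC at 2GB (value illustrative)
    vfs.zfs.arc_max="2147483648"

After a reboot the ARC stays under that ceiling, trading a little cache hit rate for predictable memory use.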
"I've been using ZFS on FreeBSD for several years now."
Why? How do you benefit from it? I honestly see zero actual utility in running ZFS on a couple home desktop machines and an (undoubtedly smallish) home server.
Note that "because I can" is a valid answer in my mind ... As is "because I want to". But is there any other reason? Does it actually do anything that you need in such a small system that isn't handled equally well with simpler, easier, less complicated tools?
I use ZFS on Linux but my rationale might be the same - I use ZFS because I value my data and I value my money. The first should go without saying but the second requires some explanation.
I need to be able to use RAID5 - RAID1 doesn't cut it in terms of how many drives I need. This means Btrfs is not an option. I am a huge fan of Btrfs because it offers some features that ZFS doesn't (and the other way around, of course), but I don't trust its RAID 5/6 implementation, whereas ZFS is rock solid.
Then there's the caching support - I can configure an SSD as an L2ARC cache without much hassle and have my (relatively) huge HDD array perform as well as an SSD after the cache has warmed up (caveat emptor - on Linux the cache starts from zero after a reboot). And with ZFS you can configure the caching behavior per dataset (whether the ARC and/or L2ARC cache metadata only, metadata + data, or nothing).
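For illustration, those per-dataset knobs are the primarycache and secondarycache properties (pool/dataset name invented):

    # ARC: cache metadata only for this dataset
    zfs set primarycache=metadata tank/media
    # L2ARC: cache metadata and data (options: all | metadata | none)
    zfs set secondarycache=all tank/media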
I have a friend who runs an 8-disk RAID-Z1 array with a 9th as hot spare and 6 Optane drives for cache. I did set it up for him and it only took like 5 min. Generally, ZFS is very easy to set up and maintain once you've wrapped your head around it. And the flexibility is great. As long as you don't have to shrink your zpool...
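Something like the following, with made-up device names, is all it takes:

    # 8-disk RAID-Z1, one hot spare, NVMe cache devices
    zpool create tank \
        raidz1 da0 da1 da2 da3 da4 da5 da6 da7 \
        spare da8 \
        cache nvd0 nvd1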
BTW, there are many important ZFS feature additions in the pipeline. I think adding a drive to a RAID set is one of them.
ZFS is filesystem, disk and archive/backup management wrapped up in one coherent and high performance package. It's like comparing a hand-crank car to one with a start button.
What are these simpler, less complex tools? For every disk, filesystem, archival or backup task I can think of, the ZFS way of doing it is simpler and requires only one tool. Even to get close to ZFS features under Linux without ZFS you would have to use multiple complex tools.
Specifically for desktops, the ability to take snapshots for free - basically no compute or disk cost - allows you to take and discard thousands of snapshots. Oracle have for a while had a GNOME extension that acts as a time slider on the directory being viewed, and there exist tools for Linux to do similar - although currently without the GUI integration that Oracle have done.
Once it starts making it in to more desktop installs, features like that will start to come more rapidly.
ZFS is able to roll back to previous snapshots. If an OS upgrade does not work properly and the root is on ZFS, then the whole upgrade can be rolled back.
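A minimal sketch of that workflow (dataset and snapshot names invented):

    # before the upgrade
    zfs snapshot rpool/ROOT/default@pre-upgrade
    # if the upgrade goes wrong (-r discards any newer snapshots)
    zfs rollback -r rpool/ROOT/default@pre-upgrade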
ZFS includes several types of checksums, including sha256, which can be set at any time. Every byte written to storage will be covered by a checksum, and you can "scrub" your storage to verify that everything on it is correct.
ZFS includes several types of compression. This compression can be adjusted dynamically at any time.
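Both checksums and compression are ordinary dataset properties, and scrubbing is a single command; a sketch with an invented pool name:

    zfs set checksum=sha256 tank      # default is fletcher4
    zfs set compression=lz4 tank      # others: gzip-1..9, zle, off
    zpool scrub tank                  # re-verify every block's checksum
    zpool status tank                 # scrub progress and any errors found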
ZFS has a raid5 implementation that closes the "write hole," and can be safely used without battery backup.
All of us need storage that is efficient and correct. This is not delivered as well on older filesystems (EXT2/3/4, XFS, NTFS, FFS).
BtrFS delivers some of this (it does not have a reliable raid5); it does deliver defrag, which ZFS does not.
ZFS is, however, the best file system for a number of uses, some of which work well in a home/personal environment. Microsoft is reimplementing some ZFS features in ReFS, and as I understand it that will be widely deployed at some point.
Internet was born in a BSD world but GPL fueled its tremendous growth. ... AC
And Drivers Futures in Easily Virtually Visualised Realities ..... for Mass Mogul Media Presentations, AC, is the Question posing here.
With New IMProbable News* Impossible to Deny be Almighty True and Perfectly Correct, is Lead Changed and Seeded to Others?
And whenever that is an Almighty Few is Absolute Command and Control Cracked and Wide Open for Commandeering ..... with AIMaster Pilot ProgramMING Testing of Global Operating Device Devices Protections Installed.
Such Permits an Almighty Few Every Opportunity to Lead with Worthy Merit Due Distinction for All Free AIdDevelopment Assistance Granted AIMaster Pilot ProgramMING Testing in Live Operational Virtual Environment Fields.
:-) What's not to Like and Love to Distraction and Attraction? Is that the Powerful Current Rush your Crushes and Crashes are All Missing?
* NEUKlearer HyperRadioProACTive IT
Internet was born in a BSD world but GPL fueled its tremendous growth.
The GPL does not advance the growth of the internet. No standard or protocol whose only published implementations were GPL'd has ever become widespread, while those with more freely licensed code often do. Only rsyncd has come close. Try to name any others. Meanwhile in the BSD and MIT/X camp there is everything from DNS to HTTP, SSH to NFS, NTP to SSL. You can even look to multimedia formats like MP4, JPEG, etc.
The necessity of interoperability with proprietary software is a fact of life. It's delusional to pretend otherwise. And GPL'd software is effectively proprietary, too, as once you've integrated more liberally licensed code into it, you can't contribute changes back upstream again (unless the whole code-base is dual-licensed).
The problem with the BSD licence is that it doesn't oblige anyone downstream to distribute Source Code. This means someone could take your BSD-licenced code, make a slight change to make it incompatible with your original, cage it up and turn it into a proprietary product.
The GPL protects against this, by saying unequivocally: Not sharing is stealing.
Now I'll have to explain to even more CTOs (and Boards) why, exactly, their favorite nephew is directly responsible for the complete slowdown of the entire corporate infrastructure[0] ...and that no, the ability to install Ubuntu does not make said nephew a sysadmin, much less a systems architect.
Ah, well. I guess it's a living.
[0] Built on low-end hardware with minimal RAM, as such nephew-built systems usually are ...
Errr...I used to run Citrix (Metaframe...those were the daysies) on a P4 for 50 users and that had a single GB of memory after I upgraded it from 512MB. The whole point of Citrix and RDS is that you can use fewer resources than VDI. If you're assigning 2+GB per user you're doing remote desktop wrong or are using it for the wrong user population. It's very good for consistent user types, it's very bad for random desktop user scenarios with loads of different apps.
Considering the eye-watering memory recommendations that FreeNAS makes regarding using their software on a headless server, it's surprising that anyone would use it as the boot FS for a full-graphical workstation. I mean, it does really cool stuff, but most of that seems like overkill for a workstation. And even if Canonical is turning off or turning down a lot of the high-flying stuff in ZFS, it's still gotta be gobbling RAM like it's Christmas morning.
It's nice to have it on your root partition as a way of reverting any changes that break stuff.
SUSE has been defaulting to BTRFS for / for a while for the same reason. That runs in a reasonable amount of RAM, you just need a lot of spare space on the / partition. I cannot imagine ZFS will be much worse.
Stock FreeBSD works absolutely fine with 6GB of RAM for 4 x 10TB Iron Wolves. I get about 230MB/s transfer speeds, which is double what my gigabit network can cope with. I think I could probably go down to 4GB without significant performance loss, but I haven't tested it. I also have virtual machines that perform other duties, each with 384MB of allocated RAM and 32GB of storage, and they work absolutely fine.
I can tell you never tried it. Memory requirements are no greater than any other file system. I have it running on ARM machines with 256-512MB of RAM just fine.
You only need tons of RAM if you use deduplication - and if you are using deduplication you better make damn sure you know what you are doing because if you merely _think_ you know what you are doing you will find yourself buying more disks and restoring from a backup as soon as your data size grows to a point where restoring from a backup is inconvenient.
DreamPlugs(*), Raspberry Pis and Toshiba AC100 laptops on the low end. RedSleeve 7 on <= ARMv6, CentOS 7 on >= ARMv7.
CentOS 7 on big stuff (aarch64 and x86-64).
(*) zfs-fuse on ARMv5 due to a really obscure issue when building kernel-based ZoL for armv5tel; kernel-based ZoL on ARMv6+.
Depends on the exact use case. ZFS performs better than most file systems on relatively dumb flash devices like SD cards with a bit of tuning. If you get an A1 or A2 rated card, the performance even on random writes is actually pretty decent.
Of course there is no reason you couldn't hang a SATA disk off it with a USB-SATA adapter if you need it for some intensive workload.
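The tuning in question is mostly pool and dataset properties; a sketch assuming a Linux box with the card at /dev/mmcblk0 (names and values illustrative):

    # 4K-aligned writes, no atime churn, cheap compression
    zpool create -o ashift=12 -O atime=off -O compression=lz4 tank /dev/mmcblk0
    # smaller records can suit SD cards better (workload-dependent)
    zfs set recordsize=16K tank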
Considering the eye-watering memory recommendations that FreeNAS makes regarding using their software on a headless server, it's surprising that anyone would use it as the boot FS for a full-graphical workstation.
FreeNAS rather assumes it will be used for an industrial grade file server with loads of clients. I have an ancient (2007, when ZFS first arrived in FreeBSD) ZFS fileserver at home with 4GB of memory and a Pentium E2200 CPU @2.2GHz and it works well enough for our use. Similarly my world facing web/mail server is a Gigabyte Brix with 4GB and a Celeron N3000 @1.04/burst 2.08GHz using ZFS on an SSD. The trick is to reduce the maximum ARC size to about half your real memory (or less), the default eats a ridiculous 7/8ths which causes thrashing unless your memory is large enough for 1/8th to hold the OS and running programs. Oh, and don't enable dedup which really does require 1GB RAM per 1TB disk.
As for graphical workstations, the machine I'm typing this on has its boot zpool on an SSD and a second pool on spinning rust, but it's an 8-core 4GHz CPU with 32GB of memory, max ARC 16GB.
As far as I'm concerned the best feature of ZFS is no more fsck after power cuts and the 2nd best is backing up by ZFS snapshots.
"the best feature of ZFS is no more fsck after power cuts"
If you have that much of a problem with your power, I respectfully suggest that changing your file system is not the answer. What you really need is a UPS.
"the 2nd best is backing up by ZFS snapshots."
Snapshots are not backups, not by a long stretch.
"Snapshots are not backups, not by a long stretch."
But due to send/receive they make for a great backup solution whereby you send the minimal changes between snapshots to a remote site.
Exactly. I was daft enough to assume people would know about this capability and so would understand "backing up by snapshot". I have ZFS snapshots (base + incremental) mirrored in house and off site. The on machine snapshots also deal with the "oh bugger, deleted the wrong file" problem as well.
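For anyone who hasn't seen it, the whole off-site scheme is essentially this (host and dataset names invented):

    # once: seed the backup host with a full stream
    zfs snapshot tank/home@base
    zfs send tank/home@base | ssh backuphost zfs receive backup/home
    # thereafter: ship only the delta between snapshots
    zfs snapshot tank/home@today
    zfs send -i tank/home@base tank/home@today | ssh backuphost zfs receive backup/home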
ZFS has a very good RAM cache. It doesn't need the RAM but it really helps with volumes containing a large number of files, like a NAS.
I'm trying ZFS caching of spinning rust RAID with NVMe and that seems to be pointless by design. Hopefully there's a better replacement for L2ARC and ZIL soon. It's easy and inexpensive for a desktop system to fit all of its active files on solid state storage, but getting cool/cold storage to fit too is way out of budget.
Already did the homework and the experiments. L2ARC and ZIL promise to speed up only certain types of transactions and, even so, they don't deliver on that either. L2ARC's tuning parameters force tradeoffs in performance that don't need to be there. ZIL targets a use case that almost never happens. The default tuning is completely broken (0.01% hit ratio) and fixing it requires creating configuration files.
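For reference, those configuration files are the ZoL module options; a sketch with purely illustrative values:

    # /etc/modprobe.d/zfs.conf - L2ARC feed tuning (values illustrative)
    options zfs l2arc_write_max=67108864    # raise feed rate from the 8MB/s default
    options zfs l2arc_noprefetch=0          # allow streaming reads into L2ARC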
I'm waiting for better documentation and features related to Special Allocation Class. I've read articles about people trying it and it sounds frustrating at this point.
Most desktops come with a 1TB SSD that's incredibly fast but a bit too small. Safely combining SSD and HDD needs to be easier or automatic.