The SFC can kiss my taint...
2-clause BSD all the way, baby!
Canonical is expanding Ubuntu's support for ZFS, an advanced file system originally developed by Sun Microsystems. Ubuntu's support is based on the ZFS on Linux project, which itself is based on code ported from OpenSolaris, Sun's open-source operating system. It is licensed under Sun's Common Development and Distribution …
Why does software need a license? If you're publishing the source code on the internet then you've already lost control of how it's used.
All licenses do is make money for lawyers.
(And provide yet another category of pointless things for geeks to argue about)
That was kind of the point of the BSD license - to write really good software so that it's used everywhere - for the good of everyone.
If the internet had been born in a GPL world, it would have never taken off, as no-one would have been able to advance their products without fragmenting the code, or not being able to earn a living.
Tom.
The FreeBSD project constantly releases software under the BSD license, while Apple is a notorious consumer of BSD-licensed software. Please check how well the two are able to earn a living and come back to tell us.
GPL is nothing but a BSD with fangs intended to bite greedy corporations/developers. To put it in simple terms, GPL is like BSD with one restriction: you are not allowed to attach any more restrictions. Just give it away the same way you received it. Even Microsoft has come to understand this and has no problem with it.
As for fragmenting code, it is preferable compared to being locked-in into bad code.
Internet was born in a BSD world but GPL fueled its tremendous growth.
> Apple is a notorious consumer of BSD licensed software.
And thank zot they are; gives the non-expert access to a Posix userland. And ports.
Of course, it is outrageous what they charge for the OS and the free upgrades! I can't count what I've spent in the years since, keeping my 2011 unit up-to-date.
Meanwhile, the BSD folks keep working away garnering support and contributions based on the quality and utility of the results, not enforced by licence terms.
GPL is great for stand-alone applications, but its tainting issues (hence the LGPL), make it a difficult choice for tightly integrated systems like kernels and OSs. I'm sure when the Hurd arrives, it'll all be fine.
Re: Apple and FreeBSD
"Please verify how well the two are able to earn a living and come back to tell us."
From my perspective, both seem to be alive and kicking
I've been using ZFS on FreeBSD for several years now. My 2 workstations are both "entirely ZFS" on bootup, and the server is UFS for the OS, ZFS for the data. I did that for a few reasons, making recovery a little easier being one of them.
ZFS tweakage isn't that hard. My server has 4GB RAM on it and works just fine with ZFS; you just tweak some of the memory footprint variables a bit to compensate. Slight reduction in file system performance, but reliability, replication, compression etc. are all working fine. But yeah, an un-tweaked ZFS file system will consume 8GB of RAM if it gets a chance.
"I've been using ZFS on FreeBSD for several years now."
Why? How do you benefit from it? I honestly see zero actual utility in running ZFS on a couple home desktop machines and an (undoubtedly smallish) home server.
Note that "because I can" is a valid answer in my mind ... As is "because I want to". But is there any other reason? Does it actually do anything that you need in such a small system that isn't handled equally well with simpler, easier, less complicated tools?
I use ZFS on Linux but my rationale might be the same - I use ZFS because I value my data and I value my money. The first should go without saying but the second requires some explanation.
I need to be able to use RAID5 - RAID1 doesn't cut it in terms of how many drives I need. This means Btrfs is not an option. I am a huge fan of Btrfs because it offers some features that ZFS doesn't (and the other way around, of course), but I don't trust its RAID 5/6 implementation. Whereas ZFS is rock solid.
Then there's the caching support - I can configure an SSD as L2ARC cache without much hassle and have my huge (relatively) HDD array perform as well as an SSD after the cache has been warmed up (caveat emptor - on Linux the cache starts from zero after reboot). Better still, with ZFS you can configure the caching behaviour per dataset (whether the ARC and/or L2ARC cache metadata only, metadata + data, or nothing).
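The per-dataset cache behaviour described above maps to the `primarycache` and `secondarycache` properties. A minimal sketch, assuming a pool called `tank` and an SSD at `/dev/sdb` (both names are examples):

```shell
# Attach an SSD as L2ARC (read cache) to an existing pool
zpool add tank cache /dev/sdb

# Per-dataset cache policy: cache everything, metadata only, or nothing
zfs set primarycache=all        tank/projects   # ARC caches data + metadata
zfs set secondarycache=metadata tank/media      # L2ARC keeps metadata only
zfs set primarycache=none       tank/scratch    # bypass the ARC entirely
```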
I have a friend who runs an 8-disk RAID-Z1 array with a 9th as hot spare and 6 Optane drives for cache. I did set it up for him and it only took like 5 min. Generally, ZFS is very easy to set up and maintain once you've wrapped your head around it. And the flexibility is great. As long as you don't have to shrink your zpool...
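A setup like the one described - 8-disk RAID-Z1 plus a hot spare and cache devices - really is only a couple of commands. A sketch with hypothetical FreeBSD device names:

```shell
# 8-disk RAID-Z1 with a 9th disk as hot spare and two cache devices
zpool create tank raidz1 da0 da1 da2 da3 da4 da5 da6 da7 \
    spare da8 \
    cache nvd0 nvd1

# Verify the layout, spare and cache vdevs
zpool status tank
```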
BTW, there are many important ZFS feature additions in the pipeline. I think adding a drive to a RAID set is one of them.
ZFS is filesystem, disk and archive/backup management wrapped up in one coherent and high-performance package. It's like comparing a hand-crank car to one with a start button.
What are these simpler, less complex tools? For every disk, filesystem, archival or backup task I can think of, the zfs way of doing it is simpler and requires only one tool. Even to get close to ZFS features under Linux without ZFS you would have to use multiple complex tools.
Specifically for desktops, the ability to take snapshots for free - basically no compute or disk cost - allows you to take and discard thousands of snapshots. Oracle have for a while had a GNOME extension that acts as a time slider on the directory being viewed, and there exist tools for linux to do similar - although currently without the GUI integration that Oracle have done.
Once it starts making it in to more desktop installs, features like that will start to come more rapidly.
ZFS is able to roll back to previous snapshots. If an OS upgrade does not work properly and the root is on ZFS, then the whole upgrade can be rolled back.
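The snapshot-and-rollback workflow above is two commands. A sketch, assuming the root filesystem lives on a dataset named `rpool/ROOT/default` (the dataset name varies by distribution):

```shell
# Snapshot the root dataset before an OS upgrade
zfs snapshot rpool/ROOT/default@pre-upgrade

# ...run the upgrade; if it breaks, roll the whole thing back.
# -r also destroys any snapshots made after @pre-upgrade.
zfs rollback -r rpool/ROOT/default@pre-upgrade
```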
ZFS includes several types of checksums, including sha256, which can be set at any time. Every byte written to storage will be covered by a checksum, and you can "scrub" your storage to verify that everything on it is correct.
ZFS includes several types of compression. This compression can be adjusted dynamically at any time.
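Checksums and compression are ordinary dataset properties, changeable at any time (they apply to newly written data), and a scrub verifies every checksummed block. A sketch against a hypothetical pool `tank`:

```shell
# Switch checksum and compression algorithms on the fly
zfs set checksum=sha256 tank
zfs set compression=lz4 tank

# Re-read and verify every block in the pool against its checksum
zpool scrub tank
zpool status tank    # shows scrub progress and any repaired errors
```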
ZFS has a raid5 implementation that closes the "write hole," and can be safely used without battery backup.
All of us need storage that is efficient and correct. This is not delivered as well on older filesystems (EXT2/3/4, XFS, NTFS, FFS).
BtrFS delivers some of this (it does not have a reliable raid5); it does deliver defrag, which ZFS does not.
ZFS is, however, the best file system for a number of uses, some of which work well in a home/personal environment. Microsoft is reimplementing some ZFS features into ReFS, and that will be widely deployed at some point as I understand it.
Internet was born in a BSD world but GPL fueled its tremendous growth. ... AC
And Drivers Futures in Easily Virtually Visualised Realities ..... for Mass Mogul Media Presentations, AC, is the Question posing here.
With New IMProbable News* Impossible to Deny be Almighty True and Perfectly Correct, is Lead Changed and Seeded to Others?
And whenever that is an Almighty Few is Absolute Command and Control Cracked and Wide Open for Commandeering ..... with AIMaster Pilot ProgramMING Testing of Global Operating Device Devices Protections Installed.
Such Permits an Almighty Few Every Opportunity to Lead with Worthy Merit Due Distinction for All Free AIdDevelopment Assistance Granted AIMaster Pilot ProgramMING Testing in Live Operational Virtual Environment Fields.
:-) What's not to Like and Love to Distraction and Attraction? Is that the Powerful Current Rush your Crushes and Crashes are All Missing?
* NEUKlearer HyperRadioProACTive IT
Internet was born in a BSD world but GPL fueled its tremendous growth.
The GPL does not advance the growth of the internet. No standard or protocol whose only published implementations were GPL'd has ever become widespread. Meanwhile, those with more freely licensed code often do. Try to name any; only rsyncd has come close. Meanwhile in the BSD and MIT/X camp there is everything from DNS to HTTP, SSH to NFS, NTP to SSL. You can even look to multimedia formats like MP4, JPEG, etc.
The necessity of interoperability with proprietary software is a fact of life. It's delusional to pretend otherwise. And GPL'd software is effectively proprietary, too, as once you've integrated more liberally licensed code, you can't contribute changes back upstream again (unless the whole code-base is dual-licensed).
The problem with the BSD licence is that it doesn't oblige anyone downstream to distribute Source Code. This means someone could take your BSD-licenced code, make a slight change to make it incompatible with your original, cage it up and turn it into a proprietary product.
The GPL protects against this, by saying unequivocally: Not sharing is stealing.
Now I'll have to explain to even more CTOs (and Boards) why, exactly, their favorite nephew is directly responsible for the complete slowdown of the entire corporate infrastructure[0] ...and that no, the ability to install Ubuntu does not make said nephew a sysadmin, much less a systems architect.
Ah, well. I guess it's a living.
[0] Built on low-end hardware with minimal RAM, as such nephew-built systems usually are ...
Errr...I used to run Citrix (Metaframe...those were the daysies) on a P4 for 50 users and that had a single GB of memory after I upgraded it from 512MB. The whole point of Citrix and RDS is that you can use fewer resources than VDI. If you're assigning 2+GB per user you're doing remote desktop wrong or are using it for the wrong user population. It's very good for consistent user types, it's very bad for random desktop user scenarios with loads of different apps.
Considering the eye-watering memory recommendations that FreeNAS makes regarding using their software on a headless server, it's surprising that anyone would use it as the boot FS for a full-graphical workstation. I mean, it does really cool stuff, but most of that seems like overkill for a workstation. And even if Canonical is turning-off or turning-down a lot of the high-flying stuff in ZFS, it's still gotta be gobbling RAM like it's Christmas morning.
Its nice to have it on your root partition as a way of reverting any changes that break stuff.
SUSE has been defaulting to BTRFS for / for a while for the same reason. That runs in a reasonable amount of RAM, you just need a lot of spare space on the / partition. I cannot imagine ZFS will be much worse.
Stock FreeBSD works absolutely fine with 6GB of RAM for 4 x 10TB IronWolf drives. I get about 230MB/s transfer speeds, which is double what my gigabit network can cope with. I think I could probably go down to 4GB without significant performance loss, but I haven't tested it. I also have virtual machines performing other duties with 384MB of allocated RAM and 32GB of storage each, and they work absolutely fine.
I can tell you never tried it. Memory requirements are no greater than any other file system. I have it running on ARM machines with 256-512MB of RAM just fine.
You only need tons of RAM if you use deduplication - and if you are using deduplication you better make damn sure you know what you are doing because if you merely _think_ you know what you are doing you will find yourself buying more disks and restoring from a backup as soon as your data size grows to a point where restoring from a backup is inconvenient.
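One way to check whether you actually know what you are doing before switching dedup on is to simulate it first. A sketch, pool name hypothetical:

```shell
# Simulate deduplication over the pool's existing data without enabling it;
# prints a block histogram and the estimated dedup (and compress) ratios,
# from which you can size the dedup table's RAM footprint
zdb -S tank

# If the numbers justify it, dedup is enabled per dataset, not pool-wide
zfs set dedup=on tank/vm-images
```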
DreamPlugs(*), Raspberry Pis and Toshiba AC100 laptops on the low end. RedSleeve 7 on <= ARMv6, CentOS 7 on >= ARMv7.
CentOS 7 on big stuff (aarch64 and x86-64).
(*) zfs-fuse on ARMv5 due to a really obscure issue when building kernel based ZoL for armv5tel, kernel based ZoL on ARMv6+.
Depends on the exact use case. ZFS performs better than most file systems on relatively dumb flash devices like SD cards with a bit of tuning. If you get an A1 or A2 rated card, the performance even on random writes is actually pretty decent.
Of course there is no reason you couldn't hang a SATA disk off it with a USB-SATA adapter if you need it for some intensive workload.
Considering the eye-watering memory recommendations that FreeNAS makes regarding using their software on a headless server, it's surprising that anyone would use it as the boot FS for a full-graphical workstation.
FreeNAS rather assumes it will be used for an industrial grade file server with loads of clients. I have an ancient (2007, when ZFS first arrived in FreeBSD) ZFS fileserver at home with 4GB of memory and a Pentium E2200 CPU @2.2GHz and it works well enough for our use. Similarly my world facing web/mail server is a Gigabyte Brix with 4GB and a Celeron N3000 @1.04/burst 2.08GHz using ZFS on an SSD. The trick is to reduce the maximum ARC size to about half your real memory (or less), the default eats a ridiculous 7/8ths which causes thrashing unless your memory is large enough for 1/8th to hold the OS and running programs. Oh, and don't enable dedup which really does require 1GB RAM per 1TB disk.
As for graphical workstations, the machine I'm typing this on has its boot zpool on an SSD and a second pool on spinning rust, but it's an 8-core 4GHz CPU with 32GB of memory, max ARC 16GB.
As far as I'm concerned the best feature of ZFS is no more fsck after power cuts and the 2nd best is backing up by ZFS snapshots.
"the best feature of ZFS is no more fsck after power cuts"
If you have that much of a problem with your power, I respectfully suggest that changing your file system is not the answer. What you really need is a UPS.
"the 2nd best is backing up by ZFS snapshots."
Snapshots are not backups, not by a long stretch.
"Snapshots are not backups, not by a long stretch."
But due to send/receive they make for a great backup solution whereby you send the minimal changes between snapshots to a remote site.
Exactly. I was daft enough to assume people would know about this capability and so would understand "backing up by snapshot". I have ZFS snapshots (base + incremental) mirrored in house and off site. The on machine snapshots also deal with the "oh bugger, deleted the wrong file" problem as well.
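The "backing up by snapshot" workflow referred to above is the incremental send/receive pair. A sketch with hypothetical dataset, snapshot and host names:

```shell
# Initial full replication of a snapshot to a remote machine
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backuphost zfs receive -F backup/data

# Later: send only the blocks that changed between the two snapshots
zfs snapshot tank/data@next
zfs send -i tank/data@base tank/data@next | \
    ssh backuphost zfs receive backup/data
```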
ZFS has a very good RAM cache. It doesn't need the RAM but it really helps with volumes containing a large number of files, like a NAS.
I'm trying ZFS caching of spinning rust RAID with NVMe and that seems to be pointless by design. Hopefully there's a better replacement for L2ARC and ZIL soon. It's easy and inexpensive for a desktop system to fit all of its active files on solid state storage, but getting cool/cold storage to fit too is way out of budget.
Already did the homework and the experiments. L2ARC and ZIL promise to speed up only certain types of transactions but, even so, they don't do that either. L2ARC's tuning parameters force tuning tradeoffs in performance that don't need to be there. ZIL is targeting a use case that almost never happens. The default tuning is completely broken (0.01% hit ratio) and it requires creating configuration files to fix them.
I'm waiting for better documentation and features related to Special Allocation Class. I've read articles about people trying it and it sounds frustrating at this point.
Most desktops come with a 1TB SSD that's incredibly fast but a bit too small. Safely combining SSD and HDD needs to be easier or automatic.
Err, no, apparently you didn't, judging by your explanations.
ZIL turns sync operations into async operations. It is only ever read on an unclean shutdown. It isn't a write cache. Buffering is done in RAM.
L2ARC is populated by what is evicted from ARC and reset on export/reboot. Unless you have a very long-running system with a working set that significantly exceeds your RAM, L2ARC will never even get populated, let alone used.
Various things never get cached in ARC (e.g. IIRC sequential reads), because the win is typically not big enough. If it doesn't get cached in ARC, it will never be in L2ARC. So I'll hazard a guess your testing didn't account for anything but the naive case of cat-ing big files to /dev/null.
I tested it with a file server, mail server, web server, and (legal) Torrent client on a 1Gbps connection. All together there's about 200 to 300GB of live data for a 400GB L2ARC partition. Three days later, the L2ARC was empty and the spinning disks were handling all activity. Doing diagnostics found that ZFS mistakes even moderately large files for uncachable streaming data. I switched off that 'feature.' Now the L2ARC was populating, but only at a few MB/hour. ZFS's default L2ARC population rate throttled to a peak of 8MB/sec. To be exact, it's 8MB/sec * duty cycle - essentially nothing. That has to be tuned too.
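The two knobs described above correspond to ZFS on Linux module parameters; a sketch of the kind of override involved (the defaults noted in comments are from contemporary ZoL releases - check your version's zfs-module-parameters(5) man page before copying values):

```shell
# Persist ZFS on Linux module parameter overrides across reboots
cat >/etc/modprobe.d/zfs.conf <<'EOF'
options zfs l2arc_noprefetch=0          # default 1: skip streaming/prefetched reads
options zfs l2arc_write_max=67108864    # default 8388608: 8MB/s L2ARC feed rate
options zfs l2arc_write_boost=134217728 # faster feed until first ARC eviction
EOF
```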
After hours of effort and two days of warm-up, the cache is working well enough that the spinning rust is free to spend time on things that really aren't cachable. I feel like overall the tuning is fragile and shouldn't be trusted.
The spinning rust is a RAID of the oldest disks I have - 10 year old 2TB WD Green drives, I think. Higher capacity new disks go into the backup system. Older disks taken from the backup system go into the server. It's correct from a storage sizing perspective but backwards for performance. That's why I was looking to ZFS for help.
FreeNAS's memory requirement comes from neither FreeBSD nor ZFS, but their "interesting" choice of stack layered on top. It's memory-hungry and slow on the server side thanks to being managed through a Django web app, and memory-hungry and slow on the browser side due to the use of one of those dreadful kitchen-sink JavaScript frameworks.
Saying that, I run FreeNAS 11 on one of those nasty Celeron G1610 Microservers with 4GB of RAM, and it works fine provided I don't need to go near the web interface. Even dedupe works fine on such a small system, provided the recordsize is cranked up to 1M or even 16M so the DDT doesn't get out of hand.
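The recordsize trick works because the dedup table has one entry per block: bigger records mean far fewer entries to keep in RAM. A sketch, dataset name hypothetical:

```shell
# Large records shrink the dedup table (fewer, bigger blocks to track)
zfs set recordsize=1M tank/archive
zfs set dedup=on      tank/archive
# Note: recordsize above 128K needs the large_blocks pool feature, and
# going beyond 1M is platform/version dependent (a tunable must be raised)
```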
> The group said that Oracle could "instantly resolve the situation" by relicensing ZFS under GPLv2.
What?
You could "instantly resolve the situation" by changing *your* lawyer-friendly license. Your license is the restriction (other opensource projects have no problems), and you want others to change to your rules so you can benefit from their software?
How would you react if Sony said "We are unable to use Linux in our products due to their licensing restrictions. Linus could instantly resolve the situation by relicensing"
Anyway, whatever your feelings on the GPL - even if you love it - you should realise that such a stupid statement smacks of irrational fanboyism and arrogance, and the PR guy who wrote it should be sacked, for the company's own good.
So instead of using the code for free and just obeying the rules, Sony would ask Linus to put a proprietary license on Linux and pay for each copy of the software they use? Doesn't look like capitalism to me.
Gave you a down vote because you fail to notice all licenses are lawyer friendly, not only GPL. I know you meant "developer-friendly" but still....
This is oracle we're talking about. They're probably just waiting for widespread adoption before suing everyone running Linux for having their dubiously licensed kit. I'm sure they will be merciful and allow their victims to buy Oracle Linux licenses as a remedy (one per installed kernel, just to be safe).
Updated version of their logo ->
"This is oracle we're talking about. They're probably just waiting for widespread adoption before suing everyone running Linux for having their dubiously licensed kit"
These days, Oracle's become a bunch of litigation-hungry lawyers with an IT department.
Seriously though, proprietary Oracle ZFS and OpenZFS are different beasts and, since 2010 when Oracle ZFS went closed source, increasingly divergent beasts. Before they were unfortunately taken over by Oracle, Sun Microsystems did intentionally put ZFS under an open source CDDL licence so Oracle would be foolish to try to take this matter to court.
hey - it's NOT that complex you know... just because YOU seem to be afraid to dive under the hood and tinker with it, doesn't mean the REST of the world must behave like a spineless coward.
however, a cautionary "skim over the manual" is probably wise for ANY new kind of OS toy.
here's my tweaks (in /boot/loader.conf):
vfs.zfs.arc_max="2147483648"
vfs.zfs.arc_min="536870912"
Helps a LOT on an older 8GB RAM workstation. On a 4GB RAM server box, I use about half those values.
... it does not hold that ZFS will get there just because ext4 did.
I believe that ext4 is somewhat backwards compatible with ext2 and ext3. My impression is that ZFS isn't. May not make a difference to those who believe that newer and shinier is always better. But a large fraction of the world operates on the "Whatever can go wrong will go wrong sooner or later" principle. Backward compatibility does matter to them.
I don't need ZFS on a desktop or laptop.
I do... Had a WD drive run fine for years, then it just started silently developing unreadable blocks and corrupting files. At least ZFS would have avoided that SILENTLY part, while RAID-Z would have prevented data loss.
Still, it's a shame BTRFS has taken so long to develop into a suitable option. Instead, RHEL is determined to essentially back-port modern features into XFS (reflinks for batch/offline dedupe, and VDO for compression), and is similarly giving up on adopting ZFS as well.
Heavy resource utilisation? Yeah, it could be back in the past when Sun launched it and customers tried it out on servers with tiny memory sizes and oodles of spindles. Today? Not so much.
As to ease of management, it is by far the easiest reasonably capable storage management solution I've used and I've used a fair number (AIX - quite nice, Solaris Disksuite, Veritas Volume manager, something that was branded otherwise but looked a lot like Veritas VM, and a few others).
We've happily based our data storage (but not root filesystems) on ZFS since 2016, although we don't have a huge amount of data. We're not religious about any aspect of software licensing, just endeavour to get our jobs done while staying on the right side of the law.
Our main server is a 12 year old Dell with 12GB RAM (only a fraction of which is needed) running CentOS + ZFS on Linux, with 2 x 4TB RAID-1 on decent spinning disks (ZFS does the RAID for us). We have an identical backup server to which we zfs send/receive hourly over Gigabit Ethernet and this usually takes a few seconds. With cron we create hourly, daily, weekly and monthly snapshots, with jobs to prune these so we always have access to daily ones for last 10 days, weekly for the last 6 weeks, and monthly ever since we created the pool. We quite often make use of files in historical snapshots. I've managed pairs of NetApp filers, but our current needs are met at a fraction of the cost.
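The hourly cron job described above boils down to snapshot-then-incremental-send; a minimal sketch (dataset, snapshot naming and host are all examples, and pruning is left to a separate job as in the setup described):

```shell
#!/bin/sh
# Hypothetical hourly cron job: snapshot, then incremental send to the
# identical backup server over Gigabit Ethernet
DS=tank/data
NEW="${DS}@hourly-$(date +%Y%m%d-%H%M)"

# The most recent existing snapshot becomes the incremental base
PREV=$(zfs list -H -t snapshot -o name -s creation "$DS" | tail -1)

zfs snapshot "$NEW"
zfs send -i "$PREV" "$NEW" | ssh backup-server zfs receive backup/data
```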
30 miles away, we have a live offsite server for DR. It's a Raspberry Pi with 1GB RAM running Raspbian Stretch with cheap 2.5" external HD storage to hold the ZFS pool. It gets sent the same hourly streams via OpenVPN over the public Internet, with each sync usually taking just a few minutes, but when it's longer that doesn't matter. If our main office is burnt down, we shouldn't have lost more than an hour's saved work (assuming we get out alive).
All 3 servers scrub their pool weekly. I've never had a bad block on the main 2 servers, and only had to rebuild the Pi pool a couple of times (bringing it back to the office to sync, which then takes a day or so). On CentOS, with kmod flavour of ZFS modules we don't have to rebuild these for every kernel update - on the Pi rebuild is a bit more tedious, but not required very often.
Several of our laptops have internal or external ZFS storage, on HD or SSD, using ZFS on Linux or OpenZFS on OSX on macOS (yes, they're compatible if you know what you're doing). These can be re-synced when onsite typically in just a few minutes, even if they're several days out of date. My own laptop runs ZFS on Linux on Debian Stretch.
My biggest wish is to see CentOS and Debian provide ZFS as standard packages, but I'm not holding my breath. The other area of improvement I'd like is getting the maintainers of the Linux and macOS forks of ZFS to make it easier to create pools without platform-specific additional features that make them non-portable.
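Cross-platform pools can be approximated today by creating the pool with all feature flags disabled and enabling only those every target platform supports. A sketch, with an illustrative (not authoritative) feature list and a hypothetical device:

```shell
# -d disables all feature flags; enable only those supported by every
# platform (Linux, macOS, FreeBSD) that will import this pool
zpool create -d \
    -o feature@async_destroy=enabled \
    -o feature@lz4_compress=enabled \
    portable /dev/sdc

zpool get all portable | grep feature@   # confirm what's actually enabled
```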