btrfs... ugh. Oh, and I recommend s3qlfs
"while reducing the amount of expertise needed to deal with situations like running out of disk space"
"had to laugh, since when has running out of disk space required expertise? Delete the old shit you no longer/never need(ed)/wanted in the first place, or buy a bigger disk."
You'd think so, wouldn't you? Until VERY recently, you could fill a btrfs disk, go to delete something, and find it wouldn't even let you delete anything, because (thanks to copy-on-write, deletes have to write new metadata before freeing the old) deletes initially require additional space. Also, with deduplication, compression, etc., a deletion may not free any space: you delete something, and another file shares the same blocks, or a snapshot still references them.
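For what it's worth, the usual workarounds for that full-disk trap look roughly like this. This is a sketch from memory; the device names and mountpoints are placeholders, and options vary by btrfs-progs version:

    # Reclaim allocated-but-mostly-empty chunks so deletes have room to work
    # (try -dusage=0 first if the disk is completely full):
    btrfs balance start -dusage=5 /mnt/btrfs
    # Or temporarily add a spare device (even a loop file on another disk):
    truncate -s 2G /tmp/spill.img
    losetup /dev/loop0 /tmp/spill.img
    btrfs device add /dev/loop0 /mnt/btrfs
    # ...delete what you need to, then take the device back out:
    btrfs device remove /dev/loop0 /mnt/btrfs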
I used btrfs as the main filesystem on one system, plus some external storage. Unlike several years back, I no longer had data corruption issues. (I don't know what was going wrong back then; it's supposed to have data integrity checks and whatever else, but with compressed files I could md5sum the same file and get the wrong answer one run and the right one the next.) But after virtually any unclean shutdown, it would go read-only at inopportune moments: the data integrity features would notice whichever file the machine had powered down mid-write on, then flip the whole filesystem read-only. Even if you remounted and just wanted to delete that file, no dice, it'd go read-only again. No fsck, and no clear description online of how to recover from that without reading everything off, reformatting, and putting everything back on.
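For the record, the "read everything off" route looks roughly like this. Again from memory, device names are placeholders, and the rescue=all mount option needs a newish kernel:

    btrfs check /dev/sdb1                          # read-only check; --repair is risky
    mount -o ro,rescue=all /dev/sdb1 /mnt/rescue   # best-effort read-only mount
    # or, with the filesystem unmounted, scrape files out directly:
    btrfs restore -v /dev/sdb1 /mnt/recovery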
What I've been using recently is s3qlfs (running on top of ext4). Long story short: on several storage drives (ext4 filesystem) I've made an "s3ql-fs" mount point and an "s3ql-data" directory for it to store its database and up-to-10MB data blocks in (it supports cloud storage like S3 and four or five others, but I'm using the local disk backend). Put like a 50GB cache on there and away you go. It does deduplication and compression, only uses about 32MB of RAM, and burns through a bit of CPU time, but for example I currently have a 4TB USB external with *looks* 4.34TB of stuff on it, using 2.77TB of space. I can run it off my "slowputer" (1GHz Core Solo) and it'll max out a USB2 disk whether running out of cache or not; on a more modern system I get something like 80MB/sec from the block storage and full disk speed (for spinning rust) out of the cache. Writes go into the cache too, so you don't get any weird write slowdowns from deduplication.
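The setup is roughly this (the paths are just examples from my disks, and double-check the man pages, since option names and units may differ between s3ql versions):

    mkdir -p /mnt/usb4tb/s3ql-data /mnt/usb4tb/s3ql-fs
    mkfs.s3ql --plain local:///mnt/usb4tb/s3ql-data   # --plain skips encryption
    # --cachesize is in KiB, so this is a ~50GB cache:
    mount.s3ql --cachesize 52428800 --compress lzma-6 \
        local:///mnt/usb4tb/s3ql-data /mnt/usb4tb/s3ql-fs
    # and when you're done:
    umount.s3ql /mnt/usb4tb/s3ql-fs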
On my home system I even threw some VirtualBox VMs on there, and it works fine, saving all kinds of space.
Oh, and as a bonus: the same kind of power cuts or USB disk unplugging that hosed btrfs? Since ext4 is my "real" filesystem, fsck fixes virtually all ills and the base filesystem doesn't lose anything. As for the s3qlfs layer on top: if a storage block got cut off mid-write, it can go read-only trying to read it. Unmount s3qlfs, go into s3ql-data and "find -size 0", delete the 0-sized files, run s3ql's fsck and it'll tell you which files were f'ed up (and usually put them in lost+found too), remount, and done.
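In command form, that recovery dance is approximately this (same placeholder paths as above, and again from memory, so check the man pages):

    umount.s3ql /mnt/usb4tb/s3ql-fs                      # if it's still mounted at all
    find /mnt/usb4tb/s3ql-data -type f -size 0 -delete   # drop the truncated blocks
    fsck.s3ql local:///mnt/usb4tb/s3ql-data              # names damaged files, moves them to lost+found
    mount.s3ql local:///mnt/usb4tb/s3ql-data /mnt/usb4tb/s3ql-fs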