I don't like change
But in this case it does make sense, and I'll get used to it eventually.
/usr
Debian is preparing to revise its default file system layout to bring it in line with other major distributions (like Fedora and CentOS). Evidence of the shift can be found in the debootstrap option that has arrived in its unstable branch, where Debian developer Ansgar Burchardt posted the news in a mailing list announcement: “ …
It's been a long time since I had a separate /usr… in fact, /usr used to be for users' home directories; it's in the name: user.
Binaries wound up there because someone many moons ago ran out of disk space, and so they developed this convention of "user binaries" versus "system binaries", hence the distinction between /bin and /usr/bin. This led to /u (SCO) and /home (BSD, Minix, Linux) being where users' homes moved to.
Today, a separate volume for /home, and perhaps /var on servers, is warranted, but not so much /usr, and it can harmlessly be folded back into /.
I've always understood usr to mean Unix System Resources and home to be for user home directories.
I have a Bell Labs paper on "Setting up Unix" which is undated, but hardware references to PDP and VAX, with a VAX disk assumed to be an RP06, would suggest the mid-to-late '70s. It says:
"The /usr file system contains commands and files that must be available (mounted) when the system is in multi-user mode."
There's a list of subdirectories, with comments like
bin: "Public commands, an overflow for /bin"
There's no reference to a /home, but one of the examples of a passwd entry shows "login directories" of /usr/ghh and /usr/grk
I've been working with UNIX for 38 years (Bell Labs V6 onwards), and while I don't disagree with you, /usr has never been used for user files in my experience in all that time. I think I read in one of the histories of UNIX that it might have been used like this on the earliest PDP 8 releases (before my time), before they moved to the PDP 11.
What was common was to actually have a /user filesystem in addition to /usr, although a convention adopted from BSD I think often had /u01, /u02 etc for user files.
IIRC, Sun introduced the concept of /home.
I think it was a matter of convention and knowledge. The install docs (V7 here, nroff source) for Bell Labs UNIX did not give very many hints about how to do it, and if all you did was to follow the docs with a single disk system, you would end up with a layout that probably left you with nowhere other than /usr to store user files (Sorry, I did have links to PDF formatted documents from the Lucent UNIX archive, but that appears to have disappeared - still, "groff -ms -T ascii filename" will make a reasonable attempt to format these for the screen).
On the first UNIX system I logged into in 1978 at Durham University, there was a separate /user filesystem which mapped to a complete RK05 disk pack (about 2.5MB per pack). / and /usr (and the swap partition) were on disk partitions on a separate RK05 disk pack. At this time in V6 and V7, disk partitions were compiled into the disk driver (in the source), and IIRC, the default RK05 split was something like 25%, 60% and 15% for root, usr and swap.
Whilst I was there, the system admins (mostly postgrad students) added a Plessey fixed disk that appeared as four RK05 disk packs, and allowed them to give ingres its own filesystem. This happened at the same time that V7 was installed on the system, over the summer vacation in 1979.
When I installed my first UNIX system (1982, again V6 and later V7 UNIX), I kept a similar convention, although I had two 32MB CDC SMD disks to play with, configured as odd-sized RP03 disks, and I split each of the disks up as either four quarter-disks, two half-disks or one complete disk - don't use overlapping disk partitions! (again in the device driver source of V7 UNIX). It was a very involved process getting UNIX onto these non-standard geometry disks, but that's a tale for another day.
During this time, I also had access to an Ultrix system which used /u01, /u02 etc. (BSD convention).
When I worked at AT&T (1986-1989), they also used the /u01, /u02... convention for user filesystems.
Following that, I've always had a /home filesystem for user files.
> probably left you with nowhere other than /usr to store user files
From the paper "Unix For Beginners - Second Edition" by Brian W. Kernighan, dated September 30, 1978:
""".. if you give the command pwd, it will print something like
/usr/your-name
This says that you are currently in the directory your-name, which is in turn in the directory usr, which is in turn in the root directory called by convention just /. (Even if it's not called /usr on your system, you will get something analogous. Make the corresponding changes and read on.)
...
"""here is a picture which may make this clearer:
                 (root)
              /  /  |  \  \
           bin etc usr dev tmp
                   /|\
              adam eve mary     (these are under usr)
...
> Today, separate volume for /home, and perhaps /var in servers, is warranted, but not so much /usr, and it can harmlessly be folded back into /.
Yes, in fact it's been done the wrong way round; /usr should have been scrapped and everything put back under / where it belonged originally.
Except if you wanted to keep compatibility, adding a symlink of /usr -> / would give you /usr/proc, /usr/boot, /usr/root, etc, which is a mess and not nice. Adding symlinks for /lib, /bin and /sbin to /usr/lib, /usr/bin and /usr/sbin on the other hand does not give you the mess.
So as far as cleaning up by merging /usr/lib, /usr/bin and /usr/sbin with /lib, /bin and /sbin goes, the way it was done was the cleaner option while keeping compatibility with all the scripts that assume where things will be located.
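(In practice the merge just leaves the old top-level directories as compatibility symlinks into /usr, roughly like this - a sketch only, the exact details depend on the distro and the merge tooling:)

    /bin  -> usr/bin
    /sbin -> usr/sbin
    /lib  -> usr/lib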
I am not convinced it had to be done at all, but I am not going to argue with people over it.
Nothing could ever replicate the mess of Windows, which has a habit of vomiting any and all unstructured crap into WINDOWS, making directories in Program Files that contain single DLLs, and creating config and cache directories in My Documents instead of AppData where it belongs. Oh, and expecting / to be writable so it can bung any old directory it likes there while installing shit, instead of a TEMP directory.
....when it makes no sense.
There is a good reason for having a small root file system with all the essentials needed to run your OS, combined with larger file systems (that will fill up on occasion) for user data and applications. You should be able to go to single-user mode and unmount /usr (and maybe resize it).
As a Debian user since 2.0 in 1998..
(whose personal servers still split /usr, /var and /home etc. onto different logical volumes), my hope is they keep system rescue utilities on /.
But sounds like they won't.. stupid stupid.. sad to see.
Reminds me of one time I tried to rescue a Solaris system many years ago: /usr was not accessible for some reason I've since forgotten, and of course I couldn't even run 'ls'. I had to rely on 'echo *' to see what files were on the system, to try to find the command to mount or fsck or something.
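(The trick works because echo is a shell builtin, so it needs nothing from the missing filesystem. A rough illustration, assuming you're left with a Bourne-style shell:)

    echo /sbin/*                           # poor man's 'ls /sbin': the shell expands the glob itself
    for f in /etc/*; do echo "$f"; done    # same idea, one entry per line, still no external commands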
Oh well, like systemd (which I haven't had the misfortune of using yet), I guess I'll have to get used to this move too eventually. (*trying not to think about it*)
> These days, I think it is easier to rescue a system from a live CD.
Assuming you have easy physical access and a CDROM drive on the machine...
In any case, the modern flavor of Debian is already almost impossible to rescue from within, as I recently discovered when a "satellite" (absolutely non-essential) filesystem could not be mounted, causing systemd to refuse to boot into anything even remotely usable. In the absence of a CD-ROM drive, I had to rip the boot drive out, mount it on another machine, and edit /etc/fstab there...
Yes, when did that start?
If a non-essential partition failed to mount in the Good Old Days, you just got an error in dmesg, and the system started up as best it could. You could then fix it.
Now, systemd gets all unhappy, and chucks you into a (useless, crippled) restore environment. Woe betide you on a headless server!
> THERE'S your problem, right there.
Yes, I know. Worst thing is, I knew it beforehand, but that's a duct-tape-and-2-pieces-of-string server: "productivity" in that everyone in the office uses it, but it's mostly "designed" to take stupid load off of the really important systems. It holds the FTP repository for the office scanner - to allow scolding of the people abusing the scan-to-mail mechanism and ease the load on the mailserver. It's also the internal "playful" web server, to allow testing without messing with the team's official website. It hosts a nice document converter able to turn (almost) anything into structured text and back into office-popular formats, and a few other utilities. It's basically where I put all the nice-to-have, but not mission-critical, office-related functions. It's never under critical load, by design. It's also designed to be the ultimate in "agility" (for the buzzword addicts), and its uptime is measured in years.

The problem stemmed from that very "agility". There was a hot disk swap (handled by hand, sue me), and I forgot to edit /etc/fstab accordingly at the time. At the next reboot, years later (last week), the target for mounting /home/ftp/scan/Documents had changed. A normal init system would have started everything it could and let me correct the problem from within. systemd just crashed, repeatedly, leaving me with no option but to rip the boot disk out, mount it on another machine and edit /etc/fstab from there before putting the boot disk back in.

I was lucky I had easy access to the machine. As it's a headless server that I manage through ssh, I had to bring in a monitor in order to figure the problem out in the first place. Oh joy.
I think people tend to forget how dire the consequences of an init crash can be. In that respect, systemd can be a helpful reminder (always look at the glass half-full!)
I was lucky I had easy access to the machine. As it's a headless server that I manage through ssh, I had to bring in a monitor in order to figure the problem out in the first place. Oh joy.
It is times like that you realise that a USB serial dongle, null-modem cable and a laptop make a much less cumbersome means of getting local access than traditional VGA console.
> It is times like that you realise that a USB serial dongle, null-modem cable and a laptop make a much less cumbersome means of getting local access than traditional VGA console.
I keep thinking that a sensible init system and a ssh connection is yet a tiiiiny bit less cumbersome.
Otherwise, I'd rather rip the drive out and deal with it in the relative comfort of my, erm, let's call it "lab" and not "impenetrable junk fort" for the sake of the argument.
Lugging a laptop, USB dongles and cables around does not meet my definition of an easy job (will there be a port available? Is the dongle not knackered? Is the cable still good?). I'm a pretty strong believer in the "back to the basics" methods, whatever that may make me...
We clearly need a "backroom gremlin" icon.
>In any case, the modern flavor of Debian is already almost impossible to rescue from within, as I recently discovered when a "satellite" (absolutely non-essential) filesystem could not be mounted, causing systemd to refuse to boot into anything even remotely usable. In the absence of a CD-ROM drive, I had to rip the boot drive out, mount it on another machine, and edit /etc/fstab there...
Same here, however, I managed to get into rescue mode and fix /etc/fstab ...
Now, I had this problem because, well, I had been given a fancy tower case with "window" and cable manglement bays (teen chose, wife bought), and I 'destroyed' a SATA cable closing the bloody case (rear panel, the one without 'window').
It is a Zalman z11 Plus ... great case, fancy and shit, replaced my ATX case from 2003 (noname, still perfect working condition, all black, looks like any grey ATX case you see near the trashcans on the pavement) ... wife loves the new case - I don't care, as long as I can stuff "shitloads of FLOPS"-producing components inside, watercooled.
Live CD? Really? Can't remember the last time I had a 'puter with a working CD (or DVD) drive. Oh, wait, maybe that tower next door has one: guess I should take a look.
Surely what matters is whether you can rescue from USB. Don't some machines have security features in BIOS that might get in the way of that?
"Don't some machines have security features in BIOS that might get in the way of [booting from USB]?"
If you're the administrator of a particular computer, then you'll have the BIOS password, if you don't, you're probably not supposed to be booting it off USB in the first place.
None of my systems use an initramfs. Being standard PCs, you only need to compile in disk and filesystem drivers to get them to boot.
The initramfs concept was invented for non-optimal distro kernels which have to boot on the 1% of obscure hardware platforms*. I see this all the time in Linux, large amounts of bloat to cope with edge cases.
*I can see a need for it for encrypted drives.
On a single spinning disk, different volumes just help to make it slower (heads need to move much more, while often-read/written data may end up in the slower part of the hard disk), and still, if the disk fails, you lose everything. Splitting across different *physical* disks may make sense. On SSDs there's no speed issue, of course.
Anyway, this change will impact some of my friends who were used to boast about the "superior" layout of Linux directories...
The physical disk failing is only one of many corruption scenarios.
It is still wise, if you don't want to faff with booting off CD, to have separate /, /var and /home partitions. This means the system will likely still be bootable if corruption occurs, because the / partition is rarely written to and is therefore unlikely to get corrupted. A corruption in the oft-written to /var and /home partitions cannot make the system unbootable.
I'm glad to see that this change Debian are making will not affect us being able to sensibly partition our systems for stability.
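The sort of layout I mean, purely as an illustration (devices, sizes and filesystem choices are whatever suits the machine):

    # illustrative /etc/fstab only
    /dev/sda1   /       ext4   defaults   0 1    # small, rarely written
    /dev/sda2   /var    ext4   defaults   0 2    # logs, spool, caches: the churn
    /dev/sda3   /home   ext4   defaults   0 2    # user data
    /dev/sda4   none    swap   sw         0 0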
> A corruption in the oft-written to /var and /home partitions cannot make the system unbootable.
If the init system is poking around in places it has no business with, it certainly can. Failure to mount /home/ftp/scan/Documents is apparently a critical failure that prevents booting anything...
That's a big "if". My systems do not do this.
"failure to boot due to unmountable drives" will be a problem if one of your filesystems is corrupted. But it doesn't mean the system is "unbootable" - simply boot in to single user mode (no rescue disk/cd/USB needed) and fix.
Even though I use it, I blame Ubuntu (and other distros) for making people forget some very good old habits, such as sane partitioning.
@DoctorSyntax: "That depends on what single user mode needs. If it needs executables moved from /sbin to /usr..."
I'm not bothered by this change of "moving /bin and /sbin to /usr" and it doesn't affect this.
I was saying to the chap above that if he partitions properly and separates out /home and /var (leaving /usr on the root partition), then he won't need to boot from usb or cd or whatever.
Single User Mode in Linux will boot fine from here. As would temporarily changing the boot commandline to run /bin/bash instead of init (for fixing really broken systems).
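(For anyone who hasn't had to do it: at the boot loader you just edit the kernel line and append one of the usual parameters. A sketch only; the rest of the line stays whatever your bootloader already has:)

    single              # classic single-user / rescue mode
    init=/bin/bash      # or bypass init entirely and get a bare root shell
    # once in, remount root read-write before editing /etc/fstab:
    mount -o remount,rw /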
if he partitions properly and separates out /home and /var (leaving /usr on the root partition), then he won't need to boot from usb or cd or whatever.
There is one thing you are missing from this.
/usr generally contains a significantly larger amount of data than / and is written to more often, too. This increases the chances of data corruption, and increases the amount of time taken to repair it (or restore it from a backup).
Keeping a tiny root partition, very rarely written to, increases the chances of being able to boot a minimal system in the case of a problem, and reduces the time taken to get into such a system in case of (certain forms of) disaster.
>I was saying to the chap above that if he partitions properly and separates out /home and /var (leaving /usr on the root partition), then he won't need to boot from usb or cd or whatever. Single User Mode in Linux will boot fine from here.
All this chap is saying is that on that particular machine, /home and /var are even on (several) separate _physical discs_; only systemd wouldn't allow a boot in any mode (but useless journald or whatever) because a sub-sub-sub filesystem in /home couldn't be mounted...
Of course, serves me right for not ridding that particular system of systemd (most of my machines are systemd-free).
> But it doesn't mean the system is "unbootable" -
In the present case, yes it did. systemd stubbornly refused to even begin the boot, not in normal mode, not in single user mode, not even the usual busybox rescue system. Nichts niente nada nothing. Just because it could not mount a drive holding the only ftp repository used by the office scanner.
Having the init take care of mount/umount is clearly a brilliant idea...
[adamw@adam quick-fedora-mirror (client-filter *)]$ man systemd.mount
...
nofail
With nofail, this mount will be only wanted, not required, by local-fs.target or remote-fs.target. This
means that the boot will continue even if this mount point is not mounted successfully.
just give your non-critical mount point the option 'nofail' and systemd will happily continue if mounting it fails. It is, however, not in the business of trying to figure out whether mounts are critical or not, because it's a bit of a mug's game.
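For example, something like this in /etc/fstab (device, filesystem and mount point are purely illustrative, borrowed from the scanner story above):

    # non-critical mount: with nofail, a failure here no longer blocks the boot
    /dev/sdb1   /home/ftp/scan/Documents   ext4   defaults,nofail   0   2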
> It is, however, not in the business of trying to figure out whether mounts are critical or not, because it's a bit of a mug's game.
Instead it is in the business of crashing for no good reason, just in case, because surely that's the right thing to do, right? If in doubt, crash the system. Sensible default, that. Can't see anything possibly going wrong with that, no siree.
I know you're being sarcastic, but actually, *yes*. systemd isn't only used to boot conventional Linux distributions, note. If all it knows is that some filesystem can't be mounted but it has no information about how critical that is, just going ahead and booting the system anyway might be a *really bad idea*. The behaviour of an operating system if some of the filesystems it expects to be there are *not* in fact there is certainly not something anyone's defined. It could do anything, including something really bad that you didn't want to happen at all. Refusing to boot until the problem is fixed or systemd is told that it's OK to boot without the filesystem seems like a perfectly sensible choice to me.
> If all it knows is that some filesystem can't be mounted but it has no information about how critical that is, just going ahead and booting the system anyway might be a *really bad idea*.
Nopesy-nope. Not even a little bit.
>It could do anything, including something really bad that you didn't want to happen at all.
[citation needed]
Server-wise, refusal to boot is pretty close to the top of the "things you REALLY don't want to happen" list.
>Refusing to boot until the problem is fixed or systemd is told that it's OK to boot without the filesystem seems like a perfectly sensible choice to me.
Very sensible indeed. Especially on a headless server possibly located in a difficult-to-access area. That way you're sure that the system will never be fixed (no boot == no ssh access). You're partially right: most of the very nasty problems arise from the system being up and running. Nothing more secure than a switched-off server!
On a separate note I've also seen systemd crash on boot because of a slightly buggy keyboard driver.
Replying to myself here. systemd systematically(d) tries to outsmart the sysadmin; that is a consistent design choice, and a very bad one. "I'm sorry Dave, I'm afraid I can't do that." Yeah well, we're not lost in outer space and there are more serious init systems available, so systemd can go sing lullabies to whoever is interested, for all I care.
As a sidenote, "systemd isn't only used to boot conventional Linux distributions, note." is entirely wrong. I'm yet to see a systemd port for MS Windows or BSD, or anything else than conventional Linux. systemd is indeed only used to boot (read "init") very specific Linux systems (namely, Poettering's own machine) to the exclusion of anything else. Unless I missed something, which is possible.
Citation needed? Oh, that's an easy one. Say we fail to mount /mnt/encryptedpartitionfullofsecretdata, and instead create /mnt/encryptedpartitionfullofsecretdata as a plain old subdirectory of unencrypted /mnt and start writing all that secret data to it. Whoops. Probably didn't want that, did you?
"systemd systematically(d) tries to outsmart the sysadmin" - I'm sorry, what? This is exactly the *opposite* of being smart. Trying to make decisions about what mount points it's OK to boot without versus what mount points it's not OK to boot without is *exactly* what would constitute 'trying to outsmart the sysadmin'. What it's doing in this case is exactly *not* trying to be smart, but simply providing settings with very concrete behaviours and respecting them. You can mark a mount point as required for boot or not required, and the default is the choice considered safest. What 'outsmarting', exactly, do you think is going on there?
This is not a valid case because it can't happen, for several reasons. If your partition is encrypted and mounted automatically at boot (which would be extremely sloppy because whole-disc encryption is always preferable and auto-decrypting a non-essential partition at boot is idiotic, but let's assume), then some sort of user input is to be expected (an encryption key, perhaps?). Errors will then be duly reported and the decryption process will fail, which will allow for corrective actions to be taken. But let's assume (again) that the user is idiotic and that there is no check on the decryption. Even then, your (not so) secure app will try to write to nonexistent directories... which is not allowed. You will get errors of the type "unable to write to /mnt/encrypted/tax_returns: not a directory" because you can't just write to a nonexistent directory structure, creating it on the fly. So your (not so) secret data will be safe, you'll be alerted to the problem, you can check dmesg, see the failed mount and fix the problem over ssh (because the server is up). Total time needed to fix: 5 minutes if you're slow.
With systemd's idiotic behaviour, you need to get on site and fix the problem on a "cold" system (I had to rip the drive out). Total time may vary but is NOT short, and the server was down all that time, meaning that other services were also unavailable, for all users. The dumbest possible way to handle things.
All that because systemd assumes it knows better, and thinks it might possibly be unsafe to boot when I tell it to ("I'm sorry Dave" etc). Of course, to answer your precise question "What 'outsmarting', exactly, do you think is going on there?" : none. None at all. Because systemd is moronic. It /tries/ to outsmart the admin, but consistently fails.
>In the present case, yes it did. systemd stubbornly refused to even begin the boot, not in normal mode, not in single user mode, not even the usual busybox rescue system.
I think there was probably more going on in your case ...
That is weird ... in my case, it was my backup "rusty" drive that failed to mount because I damaged a SATA cable closing the case ... yeah, I know ... silly me ... but then again, I have a dozen unused sata cables, so who cares. I had the same issue once before when I unplugged a drive ...
In both cases, I booted into recovery and commented out the line in fstab, this was on Debian Jessie, and the fucker booted.
On the other hand, I do think that systemd waiting 1.5 minutes, 90 seconds, FFS, for a drive to "settle" is embarrassing and tells you how good those devs are ...
Why it refuses to boot when a filesystem cannot be mounted is an entirely different discussion ... it's the first UNIX-like operating system I have used that reacts like that ... I am not young, and have used "many".
I am sure there is a switch I can use with the configure script, then recompile systemd and it will be OK ... something like --act-intelligently-with-unmountable-file-systems, or something like that .... anyway, cannot be bothered, I am out of this systemd crap.
Soooo, systemd users are now officially obliged to use the tramp icon when they post comments regarding systemd, along with the Windows users mentioning MS, but they already know that, don't they?
If you use systemd, your opinion no longer counts!
I will henceforth shut up until I get the crap off my box.
I perhaps had another problem preventing the thing from entering rescue mode. I did not really bother looking, it's a headless server in an uncomfortable place so it was easier to take the drive out than to bring a keyboard in. Refusing to boot because of a failed mount is utterly stupid, in any case.
For the person asking about the (almost) anything to structured text, it's an ugly shell script gluing together stuff like the Docutils tools, antiword, the UNO API from LibreOffice and even an OCR system (even though the result of submitting an image file is rather ugly). It's a bit of a mess really, as stuff was added over time and I keep postponing the overdue rewrite. You're probably able to make a better version for yourself.
The problem with PCs in general is that if you use the old DOS MBR partition scheme, you can only have four primary partitions, and everything else has to live as logical partitions inside an extended partition (which itself takes up one of the primary slots). This generally meant that Linux was installed in a single partition, as in a dual-boot system you could not guarantee that there was more than one partition available for filesystems.
On my laptop, I used to have a rarely used Windows 7 (32 bit) primary partition, two Ubuntu OS primary partitions (one my current use release, and the other either a previous or the next version of Ubuntu depending on where I was in evaluating the LTS releases), and an extended partition containing a /home filesystem and the swap space (plus any partition backups I wanted to keep).
When I got my latest 2nd user Thinkpad, I found that Windows used two primary partitions, adding a boot partition. I dropped one of the Ubuntu OS partitions, although I did reserve the space in the extended partition for it for future use.
I really need to think about migrating to Xenial Xerus, but I'm not 100% sure I can install Ubuntu in a secondary partition. Maybe I should just bite the bullet and do a dist-upgrade, but I am not comfortable clobbering my current daily use OS with no fallback.
My next laptop will presumably have a GPT, but that's no reason to replace my perfectly functional current system.
Stupid PCs.
Physical disk corruption is a rare issue now. Journaled file systems are much more resistant to corruption. Disks themselves are better too, e.g. they remap bad sectors silently, while SMART warns you when the disk is going to have big issues soon. A very different landscape from the 1990s.
Today, usually, when a disk fails it's some sort of mechanical or electronics failure that kills the whole disk, and using different partitions will help very little.
You can easily boot from USB today (even remotely, using one of the out-of-band management consoles), which makes it very simple to inspect a faulty machine.
I have to say, I'm with all those who want /s?bin kept separate from /usr.
I have seen my fair share of disk corruption of one form or another. Having the essential utilities on the root partition while the rest reside on a different one makes sense. It keeps a "root-only" bootable and usable system available even if the rest of the partitions are corrupted, and gives a chance to recover the system without resorting to booting from CD/USB (I'm sure I'm not the only one who has spent ages hunting for a "rescue" disk to recover a botched system).
Although if people here are correct and systemd has screwed this path up, I guess the change makes little difference....
"Physical disk corruption is a rare issue now...
Today, usually, when a disk fails..."
Rare, usually...
Usually you don't read backups. The necessity for it is rare. Does that mean you don't take them?
One of the requirements of system administration is to take precautions* against rare, unusual but potentially devastating events. Laying out the disk partitions to give you the maximum chance of recovering from such events is a sensible precaution. This rearrangement isn't being proposed to aid that, it's being proposed to make what Debian calls "busy work" for developers. Sadly, developers seem to be gradually losing touch with what the things they're developing are being used for. Is it surprising that Devuan was set up?
*Look carefully at that word. It tells you a lot.
The idea is a lot older than Linux. Back when I ran a VAX with Ultrix 1.0 (Ultrix 1.0 was pretty pure BSD), the root filesystem was only 5MB and it was normal to have a copy on all the disks. That way, when you came in in the morning to find that the disk your root FS lived on had died overnight, you just had to switch around the drive numbers and you could reboot from any of the disks. The root FS would only have the small programs you couldn't live without and /usr got the luxury ones, so ed would be in root while luxuries like vi would be in /usr (/usr/ucb back then; it only moved to /usr/bin later on).
initramfs, the new root file system.
Joking aside, much as I understand the logic behind initramfs and don't claim to have a better solution, the idea of booting a minimal OS then swapping it out for a bigger one always feels very ugly and dirty to me.
Yes the split /usr is an old unix leftover, not a linux invention.
Also, since debootstrap is a script, and not compiled, the --merged-usr is a command line option, not a compile time option (which would also be rather stupid if debootstrap were in fact a compiled program; you want things optional at runtime, not chosen at compile time).
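Using it is just a matter of passing the flag when you bootstrap; something like this (suite, target directory and mirror are only examples):

    debootstrap --merged-usr unstable /mnt/target http://deb.debian.org/debian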
Sun introduced a filesystem layout back in the '80s with SunOS 2 (I think), where /usr was a largely immutable filesystem.
What this allowed was for a system serving diskless clients to share its own /usr filesystem with the clients.
If anybody cares to remember, the diskless client model meant that Sun 2, 3 and 4 workstations could just be CPU, memory, display and network, with no local persistent storage. Back when SCSI disks were very expensive, this allowed you to centralise the cost in a large server, and keep the cost of the workstations down.
The model was that all filesystems were mounted over NFS, with / and /var (a new filesystem in this model) mounted (IIRC - my memory could be faulty and confused by the differences between the Sun and IBM models) from /export/root/clientname and /export/var/clientname on the server as read-write filesystems, and /usr (and later /usr/share) mounted read-only, served either from the server's own /usr and /usr/share if the clients ran the same architecture and OS level, or from some other location which mirrored /usr if the clients ran a different version (this allowed SPARC architecture systems to be served from Motorola ones, or vice versa).
Directories such as /etc, /var/adm, /usr/spool and /usr/tmp, which would otherwise have ended up on read-only or read-mostly filesystems, became symlinks into /var (which was unique to each client, as it was mounted from a different directory on the server).
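From memory, a client's mounts in that model looked roughly like this (server name and exact fstab syntax are reconstructed from fading recollection, so treat it as a sketch):

    sunserver:/export/root/client1   /      nfs   rw   0 0
    sunserver:/export/var/client1    /var   nfs   rw   0 0
    sunserver:/usr                   /usr   nfs   ro   0 0    # same-arch clients shared the server's own /usr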
Other vendors, including IBM and Digital, adopted very similar layouts for clusters of diskless clients. With IBM, it appeared in 1991 with AIX 3.2 (and was refined in 3.2.5). The filesystem layout meant that no machine should really write into /usr except during an upgrade, confining any variable files to /var. Unfortunately, many people (including IBM software developers) forgot this, and over the years software came to expect to be able to write into directories below /usr.
Interestingly, the IBM 9125-F2C, aka the Power7 775 supercomputer, running AIX, reintroduced the concept of diskless clients in 2011. The filesystem layout was modified slightly, with the concept of a stateful read-only NFS filesystem (STNFS), which allowed changes to the read-only filesystem to be either cached in memory for the duration of the OS run (a bit like a filesystem union), or files/directories to be point-to-point mounted over entities on the read-only filesystem into a read-write filesystem.
/ became a STNFS read only mount, /usr was a read-only filesystem, and /var was a read-write mount off of an NFS server. /tmp was left on the / filesystem, meaning files were lost on a reboot, and also that writing lots of files into /tmp reduced the amount of RAM the node had!
Work related filesystems were mounted over GPFS for performance (NFS was just too slow), although any paging did actually work over NFS (obviously, paging was a major no-no for these performance optimised machines, but we could not get AIX to run without a paging space).
Unfortunately, as I found out, the hot-swap process for adapters, run over RMC from the HMC (Hardware Management Console) had a habit of trying to construct scripts in /usr/adm/ras (on the read-only part of the file tree) to execute to enable the swap, and as a result, we were unable to hot-swap adapters, which caused problems on more than one occasion. I did raise a PMR with support/development, but had trouble arguing the problem through, as the systems were so niche, that the support droids could not understand the problem.
This was also why /bin and /sbin were kept around, since they contained statically-linked versions of binaries that would run without /usr being mounted, since in that case the shared libraries in /usr/lib would be inaccessible. This was useful if a network or boot problem meant that the NFS server where /usr was could not be reached.
I did raise a PMR with support/development, but had trouble arguing the problem through
Now see, that's why $DEITY made "var" and "usr" strings of the same length, so you could run sed on the binary, and trivially redirect the problem file accesses.
It's frankly disgusting how many complicated problems a little bit of clandestine brain surgery can solve quickly and easily.
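Something like this, assuming the path really is embedded as a plain string and your sed copes with binary input (GNU sed generally does; the binary name here is made up):

    cp hotswap_helper hotswap_helper.orig                              # hypothetical binary name
    sed 's|/usr/adm/ras|/var/adm/ras|g' hotswap_helper.orig > hotswap_helper
    # both paths are 12 characters long, so nothing inside the file shifts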
The process was under the control of the HMC (Hardware Management Console). It would create the file, execute it, and delete it, and if anything failed, the entire process failed.
I'm no stranger to doing exactly as you suggest (even using hex editors to hand-hack binaries) to move files in awkward locations to better ones, but in this case, there was no point where I could break into the process to alter the location it was trying to use.
I even had a jail-broken HMC, and worked through how the process worked. It was using a script on the read-only filesystem (so immutable, even by changing the file on the server serving it - there was some strangeness in the NFS implementation where changes on the server were not picked up on the client, something to do with it being a read-only mount with NFS caching enabled), so while I could reboot the server to pick up the changes, that negated being able to hot-swap the PCIe cards.
We did the work. I just wanted to have the process fixed, because I have what sometimes appears a perverse desire to see defects fixed, rather than working around them (especially as I had already worked through the issue, and could point to exactly where the defect was).
Must be something to do with me having worked in Level 2 AIX support for a number of years. I really don't like having to tell people who are supposed to be providing support to me how to do their job.
I'm a really awkward customer!
There are edge cases, though. As already mentioned, the fact that rescue utilities won't be in / if /usr is mounted separately.
If /usr is folded into /, it can create issues in multiboot systems, as / is tiny but /usr is relatively large, especially with desktop environments included. Granted, this is an unusual case, and not one that would typically be used in a production system.
"Granted this is an unusual case, and not one that would typically be used in a production system."
I remember one fairly grim morning caused by a SCO system having an overnight process that wrote into the root partition. Overnight it had gone wrong and the partition was at 100%. Response to any command was slow and the box was a couple of hundred miles away so a reboot into single user was very much the last resort & might not have helped. I can't remember now how I managed to get it under control but it took a long time. Moral: be very careful how you partition systems and keep the partitions which you'll need in emergencies clear of everyday use.
> not one that would typically be used in a production system.
Depends. We have a server here which is used by non-techies. Big data-crunching system, with a few different pre-defined 'pipelines' depending on the question asked. Definitely a production system: a server for clinical genetics diagnosis. While a headless system in a server room would be preferable, the non-techies require a nice GUI or they won't use it, so the thing has a big-ass graphics card and a wide screen... I seem to recall that only the data resides on separate drives, with all the "system" stuff crammed onto a single-partition drive (doesn't matter much; in case of failure the strategy has always been disc replacement with a fresh image anyway, as none of the system stuff is supposed to change...)
Whilst smaller disks were one of the reasons for partitioning, the actual allocation of files to partitions in the various partitioning schemes had a rationale, part of which was the ability to recover via a single-user boot when one of the more active file systems got corrupted. Quite a few old Unix design decisions seem to have been allowed to go by the board as their rationales have been ignored.
I suppose it all works well as long as it works. After all, you can even run without backups - right up to the point where you need them.
> Quite a few old Unix design decisions seem to have been allowed to go by the board as their rationales have been ignored.
Ignored implies that the people making these decisions knew about the rationale but chose to ignore it; I think it was more a case of the rationales being either forgotten or simply not passed on to the next generation. Of course, it's possible that the next generation simply chose to ignore the people who'd made the design decisions.
I've worked on HP-UX for eons and there is/was a design document which was circulated around inside the company which explained what sort of files went into which directories. But within a few years of its publication it was being flouted. When asking the lab engineers why they'd put a particular file in a particular directory, they'd usually never heard of the document, or of the fact that there were supposed to be rules. They just stuffed things where they thought they should go.
In Sun there was also such a document, supervised by an architecture committee. Review by that committee was mandatory, and it had the power to block projects. Woe betide any project that tried to put files in the "wrong" place.
When new-fangled SVr4 ideas like "opt" came along there was much discussion on things like whether /etc/opt or /opt/etc was correct, but a decision was taken and enshrined in that document, and it had to be obeyed. It's one reason that Solaris still has at least some internal consistency.