@ricegf2 - Posts after my own heart
I could not agree more with what you are saying.
Some people in this comment trail have been saying that the names of the UNIX/Linux filesystems are cryptic. This is not the case, as they all have meaning, although like all things UNIX, the meaning may have been lost a little in the abbreviation. I will attempt to shed some light on this, although this will look more like an essay than a comment. Please bear with me.
Starting with Bell Labs: UNIX distributions up to Version/Edition 7, circa 1976-1982.
/ (or root) was the top-level filesystem, and originally held enough of the system to allow it to boot: /bin contained all of the binaries (bin - binaries, geddit?) necessary to get the system to the point where it could mount the other filesystems. It also included the directories /lib and /etc, which I will cover in more detail later.
/usr was a filesystem that originally contained all of the files users would use in addition to what was in /, including /usr/bin, which held the binaries for programs used by users. On very early UNIX systems, user home directories were normally located under this directory.
/tmp is exactly what it says it is, a world writeable space for temporary files that will be cleaned up (normally) automatically, often at system boot.
/users was a filesystem that, by a convention adopted by some universities, served as an alternative location for holding users' home directories.
/lib and /usr/lib were directories used to store library files. The convention was very much like /bin and /usr/bin, with /lib used for libraries required to boot the system, and /usr/lib for other libraries. Remember that at this time, all binaries were statically linked, as there were no dynamic libraries or run-time linking/binding.
/etc quite literally stands for ETCetera, a location for other files, often configuration and system-wide files (like passwd, wtmp, gettydefs etc. (geddit?)) that did not merit their own filesystem. With configuration files, there was normally a hierarchy: a program would check environment variables first for its options, then files stored in the user's home directory, and then the system-wide config files stored in the relevant etc directory (more on this below).
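To make that lookup order concrete, here is a minimal sketch (in modern Python, purely illustrative - the tool name "foo" and the variable "FOORC" are made up, not any real program's behaviour) of the traditional precedence: environment variable first, then a dotfile in the home directory, then the system-wide file in the relevant etc directory.

    import os

    def find_config(tool="foo", env_var="FOORC", system_etc="/etc"):
        # 1. An environment variable, if set, wins.
        if env_var in os.environ:
            return os.environ[env_var]
        # 2. Otherwise a per-user dotfile in the home directory.
        user_rc = os.path.join(os.path.expanduser("~"), "." + tool + "rc")
        if os.path.exists(user_rc):
            return user_rc
        # 3. Finally the system-wide file in the relevant etc directory.
        system_rc = os.path.join(system_etc, tool + "rc")
        if os.path.exists(system_rc):
            return system_rc
        return None  # fall back to the program's built-in defaults

    print(find_config())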
/dev was a directory that contained the device entries (UNIX has always treated devices as files, and this is where all devices were referenced). Most files in this directory are what are referred to as "special files", and are used to access devices through their device driver code (indexed by major and minor device numbers) using an extended form of the normal UNIX filesystem semantics.
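If you want to see what "devices as files" means in practice, this little sketch (modern Python on a current Linux or UNIX box, not anything historical) shows that a /dev entry is an ordinary directory entry whose inode simply records that it is a character or block special file, plus the major number (which driver) and the minor number (which instance that driver manages).

    import os
    import stat

    st = os.stat("/dev/null")  # any device node will do
    if stat.S_ISCHR(st.st_mode) or stat.S_ISBLK(st.st_mode):
        # st_rdev packs the major/minor pair used to locate the driver
        print("special file: major", os.major(st.st_rdev),
              "minor", os.minor(st.st_rdev))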
/mnt was a generic mount point used as a convenient point to mount other filesystems. It was normally empty on early UNIXes.
When BSD (the add-on tape to Version 6/7, and also the complete Interdata32 and VAX releases) came along (around 1978-1980), the following filesystems were normally added.
/u01, /u02, ... were directories (by convention) that allowed users' home directories to be spread across several filesystems, and ultimately across disk spindles.
/usr/tmp was a directory, sometimes overmounted with its own filesystem, used as an alternative to /tmp by many user-related applications (e.g. vi).
I think that /sbin and /usr/sbin (System BINaries, I believe) also appeared around this time, as locations for utilities that were only needed by system administrators, and thus could be excluded by the path and directory permissions from non-privileged users.
Things remained like this until UNIX became more networked with the appearance of network-capable UNIXes, particularly SunOS. When diskless workstations arrived around 1983, the filesystems got shaken up a bit.
/ and /usr became read-only (at least on diskless systems).
/var was introduced to hold VARiable data (a meaningful name again), and had much of the configuration data from the normal locations in /etc moved into places like /var/etc, with symlinks (introduced in BSD with the BSD Fast Filesystem) allowing the files to be referenced from their normal location. /usr/tmp became a link to /var/tmp.
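On a system laid out this way you can still see the relocation today; a quick check (illustrative only, and assuming the machine actually uses this layout) is just to look at where the old name points.

    import os

    if os.path.islink("/usr/tmp"):
        # the old path keeps working because it is only a symlink into /var
        print("/usr/tmp ->", os.readlink("/usr/tmp"))  # typically /var/tmp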
/home was introduced and caught on in most UNIX implementations as the place where all home directories would be located.
/export was used as a location to hold system-specific filesystems to be mounted over the network (read on to find out what this means).
/usr/share was also introduced to hold read-only non-executable files, mainly documentation.
About this time the following were also adopted by convention.
/opt started appearing as a location for OPTional software, often acquired as source and compiled locally.
/usr/local and /local often became the location of locally written software.
In most cases for /var, /opt and /usr/local, it was normal to duplicate the bin, etc and lib convention of locating binaries, system-wide (as opposed to user-local) configuration files and libraries, so for example a tool in /opt/bin normally had its system-wide configuration files stored in /opt/etc, and any specific library files in /opt/lib. Consistent and simple.
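The nice thing about that convention is that a program does not need to care which prefix it was installed under. Here is a rough sketch of the idea (Python, with an invented tool name "foo" and config file "foorc" - no real tool is being quoted): a binary installed as <prefix>/bin/foo derives its etc and lib siblings from its own location, so the same code works whether the prefix is /, /usr, /usr/local or /opt.

    import os
    import sys

    # <prefix>/bin/foo -> the prefix is two directory levels up from the binary
    prefix = os.path.dirname(os.path.dirname(os.path.abspath(sys.argv[0])))
    conf_file = os.path.join(prefix, "etc", "foorc")  # system-wide config
    lib_dir = os.path.join(prefix, "lib")             # supporting libraries
    print(prefix, conf_file, lib_dir)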
The benefit of re-organising the filesystems into read-only and read-write sets was that a diskless environment could be set up with most of the system-related filesystems (/ and /usr in particular) stored on a server, and mounted (normally with NFS) by any diskless client of the right architecture in the environment. Systems of different architectures could be served in a heterogeneous environment by having / and /usr for each architecture served from different directories on the server, which itself could be a different architecture from the clients (like Sun3 and SPARC servers).
/var also became mounted across the network, but each diskless system had its own copy, stored in /export/var on the server, so that things like system names, network settings and the like could be kept distinct for each system.
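To picture the server side, here is a small sketch of the sort of mount table a diskless client would end up with (all the paths and names here are invented for illustration; real SunOS layouts differed in the details): / and /usr come read-only from a per-architecture tree, /var comes read-write from a per-client directory under /export, and /usr/share can be shared by everything.

    # Sketch only: which server paths a diskless client would mount where.
    ARCH_ROOTS = {
        "sun3":  "/export/root/sun3",
        "sparc": "/export/root/sparc",
    }

    def nfs_mounts(server, client, arch):
        root = ARCH_ROOTS[arch]
        return [
            (server + ":" + root,               "/",          "ro"),
            (server + ":" + root + "/usr",      "/usr",       "ro"),
            (server + ":/export/var/" + client, "/var",       "rw"),
            (server + ":/export/share",         "/usr/share", "ro"),
        ]

    for source, mount_point, options in nfs_mounts("server1", "client42", "sparc"):
        print(source, mount_point, options)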
/usr/share was naturally shared read-only across all of the systems, even of different architectures, as it did not contain binaries.
This meant that you effectively had a single system image for all similar systems in the environment. It enabled system administrators to roll out updates by building new copies of / and /usr on the server and tweaking the mount points, upgrading the entire environment at the next reboot. Adding a system meant setting up the var directory for the system below /export, adding the bootp information, connecting it to the network, and powering it up.
And by holding users' home directories in mountable directories, a user's home directory could be made available on all systems in the environment. Sun really meant it when they said "The Network IS the Computer". Every system effectively became the same as far as the users were concerned, so there was no such thing as a Personal Computer or Workstation. Users could log on to any system, and as an extension could remotely log on across the network to special servers that might have had expensive licensed software or particular devices or resources (like faster processors or more memory), using X11 to bring the session back to the workstation they were sitting at, and have their environment present on those systems as well.
As you can see, this was how it was pretty much before Windows even existed.
Linux adopted much of this, but the Linux newcomers, often having grown up with Windows before switching to Linux, have seriously muddied the water. Unfortunately, many of them have not learned the UNIX way of doing things, so have never understood it, and have seriously broken some of the concepts. They don't understand why / and /usr were read-only, so ended up putting configuration files in /etc, rather than in /var with symlinks. They have introduced things like .kde, .kde2, .gnome and .gnome2 as additional places for config data. And putting the RPM and deb databases in /usr/lib was just plain stupid, as it makes it no longer possible to make /usr read-only. They have mostly made default installations use a single huge root filesystem encompassing /usr, /var and /tmp (mostly because of the limited partitioning available on DOS/Windows partitioned disks). They have even stuck some system-wide configuration files away from the accepted UNIX locations.
So I'm afraid that, from a UNIX user's perspective, although many of the Linux people attempt to do the 'right thing', they are working from what was a working model, broken by their Linux peers. Still, it's better than Windows, and it's still fixable with the right level of knowledge.
I could go on. I've not mentioned /proc, /devfs, /usbfs or any of the udev or dbus special filesystems, or how /mnt has changed and /media has appeared, nor have I considered multiple users, user and group permissions, NIS, or mount permissions on remote filesystems, but it's time to call it a day. I hope it enlightened some of you.
I have written this from memory, based on personal experience of Bell Labs UNIX V6/7 with the BSD 2.3 and 2.6 add-on tapes, BSD 4.1, 4.2 and 4.3, AT&T SVR2, 3 and 4, SunOS 2, 3, 4 and 5 (Solaris), Digital/Tru64 UNIX, IBM AIX and various Linuxes (mainly RedHat and Ubuntu), along with many other UNIX and Linux variants, mostly forgotten. I may have mixed some things up, and different commercial vendors introduced some things in different ways and at different times, but I believe it is broadly correct.