One person's shortcut was another's long road to panic

Why hello, dear reader – fancy seeing you here again on a Monday – the slot The Register reserves for a fresh installment of Who, Me?, in which Register readers share their tales of tech tribulations. This week, meet a reader we'll Regomize as "Bart", who once held the grand title of "Scientist" at a research lab which did a …

  1. UCAP Silver badge
    Unhappy

    Oops!

    I would like to say that an IT professional would have built a symlink check into their code to ensure that it did not scan outside of the "junk" folder. Sadly, in my experience, that is too often not the case ...

    1. This post has been deleted by its author

      1. Wanting more

        Re: Oops!

        And where were the backups? I've had a 2TB hard drive fail and lost a lot of data (nothing important, fortunately, mostly my stash of very legal media downloads), so I'm a bit paranoid now. And RAID systems don't fix theft or user error.

        1. hedgie Bronze badge

          Re: Oops!

          I have lost a *lot* of data in one go from not verifying a backup of an HDD: one that had until then held /, swap and /home on it, and was getting nuked and turned into a data-only drive since I had just picked up an SSD for the system stuff. Some of it (film scans) I could redo, but everything that was originally taken on a digital camera was gone. Including the one bloody photo that I’ve actually been paid to print for someone.

        2. rcxb Silver badge

          Re: Oops!

          and where were the backups?

          The article is talking about time-sensitive, recently processed data, not archives. How many times per day do you run backups on your systems?

          1. Quando

            Re: Oops!

            > How many times per day do you run backups on your systems?

            Six, rotating between two destinations, for my main workstation.

            1. Killfalcon

              Re: Oops!

              You can have too much of a good thing, especially if you're having to pay for it.

              That's quite expensive when you're looking at terabyte-scale processing, and has a high risk of being the reason runs fail. Back in 2010ish, my lot (finance actuarial stuff, generating hundreds of TB per year) were spending six figures on daily backups, and those ran out-of-hours to minimise the risk of trying to back up a file that's only part-processed, or of accidentally locking a file that needed to be edited by the simulation software (etc). We did have on-demand access to the last 30 dailies, the last 12 month-ends, and the last seven year-end backups, though, which was absolutely worth the cost.

          2. phuzz Silver badge

            Re: Oops!

            How many times per day do you run backups on your systems?

            At my last job I was creating a new snapshot every hour during business hours on the file server, keeping (I think) the last 12 hourly snapshots. (And then daily/weekly/monthly rotations, backing up to tape, etc.) That was for normal user files (spreadsheets and the like) and worked well, and gave me very quick restores for the "oops I just overwrote a file I need in five minutes" type requests.

    2. Pete Sdev

      Re: Oops!

      In all fairness, Bart was employed as a scientist, not a sysadmin.

      Ignoring *all* symlinks might have had side-effects: there could have been legitimate symlinks inside the data directories.

      To be robust, you'd have to check all symlinks and ignore only those that go outside a specific level. And repeat the check if it's a symlink to a symlink...
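
      A minimal sketch of that kind of containment check in Python (the path and helper name here are purely illustrative, not anything from Bart's actual script): resolve the whole chain of links with realpath, then only treat an entry as fair game if it still sits under the junk directory.

      import os

      def is_inside(base, path):
          # realpath() resolves the full chain of symlinks (a link to a link
          # included), so compare the final destination with the base tree.
          base = os.path.realpath(base)
          target = os.path.realpath(path)
          return os.path.commonpath([base, target]) == base

      junk = "/data/junk"   # hypothetical junk folder
      for name in os.listdir(junk):
          full = os.path.join(junk, name)
          if not is_inside(junk, full):
              continue       # the entry escapes the tree: leave it well alone
          # ... safe to purge 'full' here ...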

      1. Terry 6 Silver badge

        Re: Oops!

        I suspect that an awful lot of techie-minded people doing moderately techie side jobs would not know, or not realise, the power of those things: that such a link, unless otherwise excluded, effectively makes the linked folder behave as if it were inside the one you are working on.

        1. Anonymous Coward
          Anonymous Coward

          Re: Oops!

          30ish years ago I worked for a (very well-known) multinational as a "User Analyst" in the central regional warehouse. My job contained many different tasks, one of which was quickly locating all items of a specific SKU with a specific production batch code, typically for manufacturing recalls. The batch code was stored in the logistics database, but it was not possible from within the UI to connect it to the actual physical locations. I was told by my predecessor that I could possibly run a query against the SQL database to find what I needed, but that it required more access than I had initially been given, and that he'd try to arrange it for me. Not ten minutes later, the IT manager sent me a message over the corporate email system containing the root password for the servers running the whole system. It was five characters long, with the last two being the current year.

          Now I design and run the national backbone of a telecoms provider / ISP. Our passwords are longer than 5 characters.

          1. Mishak Silver badge

            If you share your password with me...

            I keep "upsetting" our IT support by refusing to share my password with them when "they need to make changes". Isn't that what admin accounts are for?

            1. JimC

              Re: If you share your password with me...

              Yes, it's appalling practice. But in the days before widespread remote management it was also mightily convenient, especially as it's also appalling practice for first- and second-line IT support to have admin access. The temptation was always there, and less IT-aware management would often approve doing it, no matter how much people like me objected on principle. I'm moderately surprised it still happens, though, because now there are alternatives.

              1. Mishak Silver badge

                Re: If you share your password with me...

                This was when they were using remote access...

            2. John Brown (no body) Silver badge

              Re: If you share your password with me...

              "I keep "upsetting" our IT support by refusing to share my password with them when "they need to make changes". Isn't that what admin accounts are for?"

              Where I work, just asking for someone's password starts a disciplinary process, if the person being asked is a bastard or suspects it's a "security check" and reports it.

          2. Anonymous Coward
            Anonymous Coward

            Re: Oops!

            "Our passwords are longer than 5 characters."

            Ahh, so you append the entire year to the password, instead of just the last two digits... Y2K compliant security policy right there!

            1. Strahd Ivarius Silver badge
              Trollface

              Re: Oops!

              no, it is simply "password"...

              1. Grinning Bandicoot

                Re: Oops!

                p10ssw0r13

                p2^3+2^1ssw0r2^4-2^1

                Halt or I'll shoot

      2. Richard Tobin

        Re: Oops!

        There's no reason to follow symbolic links in a program like this. If the symbolic link is to outside the relevant filesystem (or subtree), it shouldn't be followed. And if it's inside the filesystem, there's no need to follow it because you will look at the destination directory anyway.

        1. Pete Sdev

          Re: Oops!

          it's inside the filesystem, there's no need to follow it because you will look at the destination directory anyway.

          Depending on the tool/code used, completely ignoring symlinks could leave dead symlinks littered around.

          1. spuck

            Re: Oops!

            But there is a difference between following symlinks and ignoring them.

      3. ldo Silver badge

        Re: Ignoring *all* symlinks might have had side-effects ...

        Not sure why that would be a bad thing. Surely the items concerned would be found via real links at some other point?

    3. Richard Tobin

      Re: Oops!

      This problem was encountered pretty much as soon as symbolic links were introduced in 4.2BSD. Each utility that traversed the filesystem (du, find, etc) had to have a flag added to indicate whether symbolic links should be followed. I remember a version of SunOS in the mid-1980s whose cron job to remove old files in /tmp followed symbolic links, with predictable results.

      1. ldo Silver badge

        Re: pretty much as soon as symbolic links were introduced

        Jeremy “Samba” Allison reckons that symlinks are fundamentally flawed. Not sure I entirely agree. Though it took some work to tame them, like introducing the openat2() call.

    4. The Man Who Fell To Earth Silver badge
      Boffin

      Re: Oops!

      #1 rule of programming of any kind: People are idiots. Write your program the first time assuming that every stupid thing possible will eventually be thrown at it.

      1. Strahd Ivarius Silver badge

        Re: Oops!

        It is impossible to anticipate every stupid thing possible; idiots are so inventive...

        I once made a rigorous list of all the possible errors one could encounter with data provided by users.

        The list of errors had been validated by the product owner and his team; they had added some specific use cases I was not aware of (they had been running a similar system for years; the new version was a full rewrite with new display systems).

        The data was to be provided on CD-ROM by an external company.

        With the first delivery of test data, we got one CD with no files, only the catalog...

      2. An_Old_Dog Silver badge
        Joke

        Re: Oops!

        I'm not smart enough to think stupid enough.

    5. Jou (Mxyzptlk) Silver badge

      Re: Oops!

      Symlinks, then hardlinks, then junctions, then kernel-builtin specials, etc. It is easy to be Captain Hindsight. What is your excuse for not being at NASA at that time and preventing this?

      1. FirstTangoInParis Silver badge

        Re: Oops!

        Indeed, I would suggest most experience is based on mistakes, your own and other people's. Most safety rules exist because someone got hurt doing whatever. We know deadly nightshade is poisonous most likely because some people consumed it and died.

        Anyhow, a few tales: the network drive where loads of data was lost because the backups didn’t work. Or when, trialling DR, I backed up a large partition and mistakenly restored it to a much smaller root partition, causing a panic. Or emptying those SunOS wastebaskets by deleting the .wastebasket directory before the nightly backup, only to discover that if you did it while the user was working late and still logged on, his desktop crashed, which meant a much more complicated script was needed.

  2. Korev Silver badge
    Coat

    So, a symlink set off a whole chain of events...

    1. b0llchit Silver badge
      Coat

      Until the chain broke?

  3. Prst. V.Jeltz Silver badge

    You can make similar cockups with the /MIR function of Robocopy. It's reluctant to follow Microsoft's version of symlinks, though.

    Lucky escape there for Bart, although hopefully the 2.5 petabytes of storage was backed up.

  4. SVD_NL Silver badge

    Genuinely curious...

    ...why this colleague made a symlink to the root folder of a different server in the junk folder.

    It seems very unlikely this was an accident; the only reason I can think of is that he wanted to quickly move folders from the junk folder to the storage server (perhaps incentivised by Bart's aggressive purging strategy).

    Also, IMO the best strategy here would've probably been running the script as a user or service with access limited to the junk folder. This limits any and all damage to the junk folder, no matter how stupid other people are!

    1. theOtherJT Silver badge

      Re: Genuinely curious...

      Good idea if possible, but given POSIX permissions you can only have owner, group, other. If "owner" of the junk files was whoever ran the script, and "group" was, for example, the working group that user was a member of, there's no room left for a separate group for the operator account that runs the script. Either it has to be a member of all the potential working groups (which won't help you if those groups own files on other servers, which they almost certainly do in a large shared environment where all the groups are coming from LDAP or similar) or it has to be run as root, or some root-equivalent power.

      Depending on how long ago this was and what filesystem they were using, extended ACLs may not have been an option.

      I'm thinking this is where the "do not cross filesystem boundaries" option to find is going to be your friend.
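
      (Something like that boundary check is easy enough to approximate in a script, too; this is just a sketch with a made-up starting path, comparing device numbers the way find's -xdev behaviour does.)

      import os

      root = "/srv/junk"                    # hypothetical starting point
      root_dev = os.lstat(root).st_dev      # filesystem we started on

      for dirpath, dirnames, filenames in os.walk(root):
          # Prune anything that lives on a different filesystem, so the walk
          # never wanders across a mount point onto another volume.
          dirnames[:] = [d for d in dirnames
                         if os.lstat(os.path.join(dirpath, d)).st_dev == root_dev]
          # ... process filenames here ...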

      1. doublelayer Silver badge

        Re: Genuinely curious...

        I suppose you could try running your script in a chroot of the directory concerned, which depending on where the link was going might or might not prevent the program from going there as well. However, when you get to the point of involving chroot, you're also at the level where you could write explicit symlink logic. It sounds like this script had not gotten to either level.
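
        (For what it's worth, the chroot itself is only a couple of calls; a sketch with a made-up path, and it has to run as root.)

        import os

        jail = "/data/junk"   # hypothetical directory to confine the script to
        os.chroot(jail)       # requires root (or CAP_SYS_CHROOT)
        os.chdir("/")         # make sure the working directory is inside the jail
        # From here on "/" is the old jail directory, so an absolute symlink
        # pointing elsewhere resolves inside the jail, or to nothing at all.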

    2. doublelayer Silver badge

      Re: Genuinely curious...

      One option is that there was a script somewhere which used relative paths, and moving the script somewhere else was harder than just linking in the data for it to work on. I've had the experience, in fact I'm having the experience right now, of a script that's not written well, where it would theoretically be faster for me to work around its errors than to go in and fix it. For instance, a script I have which needs Protobuf and does not work with modern versions of Protobuf. If this were an important part of a system, it would make sense for me to rewrite the logic to use modern behavior, which shouldn't be too hard (I didn't write the initial version), but since I run it manually and on offline data once every six months, I just keep an old copy of Protobuf around in there.

  5. Michael H.F. Wilkinson Silver badge

    Ouch!

    I do remember one script I wrote to back up stuff getting into an infinite loop because someone had made a symlink loop in their directory structure. This resulted in loads of extra copies on the backup drive before I could stop it. Changed the script to ignore symlinks. Fairly harmless, but annoying, as I had to clean up the backup manually.

    1. Joe W Silver badge
      Pint

      Re: Ouch!

      Ah, joy of joys, the self-taught rsync users and their (i.e. my!) self-written backup scripts...

      "teehing troubles"... right?

      Meh. Long time ago. And storage _was_ at a premium; my time wasn't. I need a drink, I think. Too early, though, and I have some things to do that are not compatible with daytime drinking. Not like that time when we had some time to kill after a conference and went on a tapas tour in the late morning (until the evening, when we had to head to the airport...) in Honolulu, eating small dishes and drinking Mai Tais...

    2. druck Silver badge

      Re: Ouch!

      Well, there are two test cases to remember any time you are writing a script which does directory traversal:

      1. symlink outside the target directory structure

      2. symlink loop

      1. ldo Silver badge

        Re: writing a script which does directory traversal

        Here’s an example:

        import os
        import stat

        def traverse(dirname) :
            for item in os.listdir(dirname) :
                childitem = os.path.join(dirname, item)
                info = os.lstat(childitem)
                if stat.S_ISREG(info.st_mode) :
                    ... # regular file
                elif stat.S_ISDIR(info.st_mode) :
                    traverse(childitem) # subdirectory, do recursive traversal
                #end if
            #end for
        #end traverse

        Notice the use of lstat, not stat, so it doesn’t follow the link. And also notice the explicit check for regular files and subdirectories, so it ignores everything else.

  6. jake Silver badge

    As an old fart, I always expect someone else's stupidity.

    Also as an old fart, I expect myself to make stupid mistakes, too (I'm only human!), and program accordingly.

    "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." —Albert Einstein (supposedly)

    "Apart from hydrogen, the most common thing in the universe is stupidity." —Harlan Ellison

    "There is more stupidity than hydrogen in the universe, and it has a longer shelf life." —Frank Zappa

    1. GrumpenKraut
      Boffin

      Heuristics, heuristics, and more heuristics.

      I tend to use heuristics: Is the thing running on the right host? Does $(pwd) match the prefix it should? Is $USER what it should be? etc. etc.

      Bail out with an informative message on anything dubious. Stuff like that has saved my arse so often!

      Admittedly, scripts have to be modified sometimes. But that little bit of dull work is preferable to data loss or corruption.
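
      A sketch of that belt-and-braces style in Python; the hostname, prefix and user below are invented placeholders, not anything from a real setup.

      import getpass
      import os
      import socket
      import sys

      def bail(msg):
          sys.exit("refusing to run: " + msg)

      # Heuristic guards before doing anything destructive.
      if socket.gethostname() != "scrubber01":          # hypothetical host
          bail("wrong host")
      if not os.getcwd().startswith("/data/junk"):      # hypothetical prefix
          bail("unexpected working directory " + os.getcwd())
      if getpass.getuser() != "junk-cleaner":           # hypothetical user
          bail("unexpected user " + getpass.getuser())
      # ... only now do the real work ...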

  7. aerogems Silver badge

    Hey

    At least it's better than the complete scuttling of a roughly $300m mission because someone forgot to convert between Imperial and Metric units.

    1. Anonymous Coward
      Anonymous Coward

      Re: Hey

      Nobody forgot to change between units. Everyone made the sensible assumption that everything was metric as per agency policy.

      It was the subcontractor that was stupid enough to be using steam-engine units without telling anyone. Or just simply being stupid enough to use a steam-engine system of measurement in the first place.

      1. aerogems Silver badge
        Facepalm

        Re: Hey

        Just what do you think failing to convert between units means? Because you just described it.

    2. Korev Silver badge
      Alien

      Re: Hey

      At least it's better than the complete scuttling of a roughly $300m mission because someone forgot to convert between Imperial and Metric units.

      The Register have solved this problem...

  8. Sceptic Tank Silver badge
    Facepalm

    So much free space

    Did this one on a Unisys A-Series painframe. As far as I can remember there were precisely zero ways to prevent other users from accessing each other's files. There was a directory with subdirectories per user where everybody could store their personal scripts, text files, and other artifacts. One day I wanted to clean up my personal directory but I specified the top-level directory instead, and there went everybody's data. It went pretty quickly because there wasn't much in those directories – the week before somebody else had accidentally done exactly the same thing.

    1. Jou (Mxyzptlk) Silver badge

      Re: So much free space

      This sounds so much like cloud storage...

      1. Anonymous Coward
        Anonymous Coward

        Re: So much free space

        'Cloud' is basically 'mainframe', but further away.

  9. Sam not the Viking Silver badge
    Pint

    Check before Deleting

    As a newly-employed graduate I spent time in various design offices 'doing the rounds' as part of the ongoing training. These offices were highly technical and manned (including ladies) by experienced, skilled engineers who had been brought up using the slide-rule for numerical calculations. A very new, very expensive electronic desk-top calculator was brought in for those calculations that didn't need to go to the 'Computer Department' for longer, iterative processes. Although programmable, it had limited memory for recording the programme and it was quite common to write and run the calculation each time. The answer was shown on an LCD display. This desktop device was shared amongst an office of eight and usage was governed by gentlemanly enquiry.

    It was after lunch, a warm afternoon with the sun streaming in through the windows and several of the engineers were in deep, deep thought. Very deep. Eyes closed as they theorised or even dreamed of new mechanisms, slack jaws on chests in relaxation. Tony had even let his pipe go out. Rather than disturb these thinkers I used the calculator without checking...... When Tony's thoughts returned to earth he was most upset that the calculation he had been working on before lunch had been wiped out. He never forgave me for 'not checking before using'.

    It wasn't long before personal hand-held calculators became ubiquitous, so the lost-programme problem didn't persist. Much later, air-conditioning reduced/stopped the afternoon 'thought reorganising'.

  10. theOtherJT Silver badge

    I may have done this...

    ...while cleaning up home directories.

    See, no one on the compute nodes should have a home directory except the operators account; that one belongs to a service account that runs Ansible jobs across the fleet. Everyone else's home directory should be an NFS mount to a storage box.

    Unfortunately there had been an incident where the NFS mount had failed and a few people had managed to log in anyway due to some oversights in pam_mount and then left a bunch of work on the local /home partition.

    Didn't take long to sort out; fortunately their workload was such that all we had to do was remount the shares elsewhere and rsync some working directories around to get everything back where it was. Then we clean up the redundant ones in /home, reboot the box, and everything will be back to normal. Right?

    Well... No. See, what I didn't do was disable all other user logins while I was cleaning up the mess, and someone didn't get the message in the group chat about "please don't log in here for half an hour while I clean this up".

    It was on this day that I learned of the wonderful -xdev argument to find, and found out why I should have been using it, as I completely nuked the unsuspecting user's remote home directory and had to restore it from last night's backup.

    1. Korev Silver badge
      Pint

      Re: I may have done this...

      > It was on this day that I learned of the wonderful -xdev argument to find and found out why I should have been using it

      I've just learnt something

    2. ldo Silver badge

      Re: where the NFS mount had failed

      Had this happen once or twice, too. My solution was to make the underlying mount-point directory read-only to everybody. That way, if there was no filesystem mounted there, they would get errors from not being able to create any files.

      1. theOtherJT Silver badge

        Re: where the NFS mount had failed

        Ah, see, this is where pam_mount came in, because that's running as root and it will create the directories for remote users when they log in so it has somewhere to do the mount, and then delete them again when they log out. Unfortunately I missed a trick further down the stack where they should have been automatically logged out again and given a "There's something wrong with this host, please report it to the IT team" message if the mount failed. Instead pam_mount created them a nice empty directory owned by them, as opposed to what it should have done, which is return a failure to the next step in the chain.

      2. Peter Gathercole Silver badge

        Re: where the NFS mount had failed

        There used to be a rather strange behaviour in AT&T UNIX SVR2 (and possibly other versions) whereby descending into a filesystem across its mountpoint through its top level directory would use the permissions of the TLD on the filesystem, but moving back up across the mount point would use the permissions of the directory that was mounted over.

        So, if the underlying directory had permissions of 0664 (drw-rw-r--) before the filesystem was mounted, but the top level directory was 0775 (drwxrwxr-x), once mounted, you could change directory into the directory at the top of the filesystem, but if you then did a "cd .." from the top level directory, it would give you a "permission denied" error. This actually created some problems with a few library routines that chased the directory structure back up to work out the fully qualified path of a file or directory.

        I actually had access to the source, and so I traced it through to work out what it was doing, and it was actually working as written. When I questioned my escalation path (I was working in AT&T at the time, but it was a long time ago), I was told it was working as designed. I think I was told a reason, but I really can't remember it now. As a result, to this day, I still make sure that the permissions on the mount point and the top level directory of a filesystem match, and give the desired permissions. I do not actually know whether this behaviour is still the case in either genetic UNIXes or Linux, but I do it anyway.

        1. Jou (Mxyzptlk) Silver badge

          Re: where the NFS mount had failed

          > So, if the underlying directory had permissions of 0664 (drw-rw-r--)

          Sounds somehow like a possible security issue if used the other way around: where the mounted filesystem, with its "..", could give you more access than the directory above should? Or am I wrong? I doubt your issue would appear on current *nix boxes, whereas with your mid-1980s Unix I would not be surprised.

          > working as designed.

          Yeah, it was designed to be flawed :D.

        2. ldo Silver badge

          Re: moving back up across the mount point

          Just tried creating a read-only directory to use as a mount point. Then mounted another filesystem there and gave ownership of its root to a nonprivileged user. As that user, did a cd to that directory, and verified that I could create a file in there. Then did a “cd ..”, which worked fine, as expected.

          I’m not sure how using the wrong permissions would work, anyway, given that the directory that the wrong permissions are coming from would be inaccessible.

          1. Peter Gathercole Silver badge

            Re: moving back up across the mount point

            But that is the point. Moving down into the filesystem, the permissions on the TLD allowed the access. But moving back up into the parent directory on another filesystem used the permissions on the underlying mount point, which were not checked when entering the filesystem, and so the movement was denied (you need "x" on a directory to move through it, of course).

            As I said, it's a long time ago (more than 30 years), and my memory is more than a little hazy, but I'm pretty sure I remember the circumstances correctly.

            It's not worth trying to replicate it, although I probably could see whether Bell Labs UNIX Edition 7 on my PiDP11 demonstrates the problem.

            1. ldo Silver badge

              Re: used the permissions on the underlying mount point which were not used

              There is no conceivable scenario under which such permissions could have been relevant to anything. (Just tested on a mount point which gave no access to anybody; the mounted filesystem worked for a nonprivileged user just fine.)

  11. Anonymous Coward
    Anonymous Coward

    There's the old .* gotcha when using chown to recursively correct the ownership of directories, files, and hidden files in a user's home directory...

    I learnt of this the hard way when doing this at an ISP back in the mid-'90s. Less than a minute to blat all the ownerships of 10,000 user home directories and contents, and two hours for my corrective script to undo my oopsie.

    1. ldo Silver badge

      old .* gotcha

      The POSIXLY-correct wildcard to match hidden files/directories, but not those useless “.” and “..” entries, is “.[!.]*”.
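
      If you want to see how that pattern behaves, here's a quick check with Python's fnmatch (just a demo list of names; note the pattern also skips anything starting with two dots, such as a real file called "..weird"):

      import fnmatch

      names = [".", "..", ".bashrc", ".a", "..weird", "normal.txt"]
      hidden = [n for n in names if fnmatch.fnmatch(n, ".[!.]*")]
      print(hidden)   # ['.bashrc', '.a'] -- '.' and '..' are excluded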

      1. DJohnson

        Re: old .* gotcha

        HOW have I missed learning that? Thank you, it shall be used often!

      2. Peter Gathercole Silver badge

        Re: old .* gotcha

        I can understand that you may find "." a little useless, but ".." is vital. Without it, you would not be able to go up a directory in the directory structure in the shell. It will be used under the covers in all manner of other situations (as I said in one of my other posts, in ksh, typing "pwd" actually chases all the way back up to the top of the root filesystem, one directory at a time, using "..", identifying the name of each directory as it goes [actually, that probably used the "." entry to get the inode number of the directory to be able to obtain its name]).
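
        (For the curious, that old trick is easy to sketch in Python: stat ".", then hunt for the matching inode among the entries of "..", and repeat until "." and ".." are the same directory. Simplified, and glossing over some mount-point subtleties.)

        import os

        def slow_pwd():
            parts = []
            path = "."
            while True:
                here = os.stat(path)
                parent = os.stat(os.path.join(path, ".."))
                if (here.st_ino, here.st_dev) == (parent.st_ino, parent.st_dev):
                    break          # "." and ".." are the same: we've reached /
                for name in os.listdir(os.path.join(path, "..")):
                    st = os.lstat(os.path.join(path, "..", name))
                    if (st.st_ino, st.st_dev) == (here.st_ino, here.st_dev):
                        parts.append(name)   # found our own name in the parent
                        break
                path = os.path.join(path, "..")
            return "/" + "/".join(reversed(parts))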

        But thinking this through, even when in the shell, I use the "." entry quite frequently. If you don't have "." on your path (as you shouldn't for security reasons), you can run a script in your current directory by using ./script_name. I use that literally all the time!

        If you actually bother to dig into how the UFS filesystem works, having entries that point up in the directory structure is vital to its function by design. It's also worthwhile knowing how the link count shown by ls is affected by having sub-directories in a directory.

        Once you get away from UFS and other POSIX-compliant filesystems, things may work differently, but the UNIX filesystem design was very influential on a number of different OSes' filesystem designs over the years.

        One interesting note. For a slightly obscure but quite well documented experimental distributed filesystem back in the early '80s called the "Newcastle Connection", also known as "Unix United", the developers invented another entry "...", normally in the TLD of "/", which allowed you to do something like "cd /.../machine/<path>", which took you up to the super-root of the network, then back down into the filesystem of another system on the network. It was interesting as it could be added to a system without any kernel modifications, just by replacing the C library which contains the stub code for the system calls, and linking your programs to that library.

        I remember going to the computer lab. in Claremont Tower at Newcastle University where they were developing it, and seeing my contact there write to a tape on another machine just by specifying a path to its tape drive entry in /dev through this mechanism (I also remember the complaints from other people in the lab. because the transfer dominated the Cambridge Ring network that was linking the systems together). It was like magic, and something that has only been possible with NFS since version 4, so much later (although AT&T RFS would allow something similar).

        1. ldo Silver badge

          Re: “..” is vital

          > Without it, you would not be able to go up a directory in the directory structure in shell.

          My point is that “.” and “..” can have their special interpretation built into the kernel’s pathname-parsing routines, rather than having the hack of burdening every directory with these redundant entries.

          First of all, “.” is redundant: the kernel pathname parser can directly interpret it as “stay in the same directory”.

          Secondly, “..” can also be interpreted specially in the kernel, because it keeps track of how you got to the current directory anyway. (Remember, a directory cannot be hard-linked from more than one parent.) And there is one case where it has to be interpreted specially, and that is where you are already at the root of one particular filesystem, and going up means crossing over into another. Again, the kernel has to keep track of that, so it knows how to do the interpretation. So why not just have it do it in all cases?

          1. Jou (Mxyzptlk) Silver badge

            Re: “..” is vital

            > Secondly, “..” can also be interpreted specially in the kernel

            TODAY... Go back a few years. DOS and quite a few others were known to save a few bytes of memory by not storing/caching the ".." directory entry but actually following it on disk in operation - and not just during a chkdsk.exe run. It is not like every DOS computer had 256 KB of memory to spare for a little "Buffers=" entry in config.sys.

          2. Peter Gathercole Silver badge

            Re: “..” is vital

            I contend that they are not redundant in UFS, as they are a critical part of the design of UFS and derived filesystems. You could not remove them without fundamentally breaking UFS. What you say may be true for non-UFS derived systems, but that is another argument.

            It seems to me that if you didn't have the concept of a link to the directory above your current directory, you would have to keep a record of the full path to the current directory in all processes, because if you didn't, finding out where you are on a hierarchical filesystem might be pretty difficult without either the processes being made knowledgeable of the device your current directory is sitting on (together with being able to read information about that device from a user-land process), or scanning the entire directory tree whenever you need to know where you are.

            Of course these things can be hidden in the syscalls, or maybe in the path resolution code, but for the time, keeping links to the directory above was an elegant and simple solution that meant you could make the code less complex.

            Another point at which ".." was useful (again, an archaic argument) is that it actually allows you to piece a filesystem back together more easily in the case of filesystem corruption. If you have a back pointer and a directory gets unlinked from its parent (for example by the parent directory file being corrupted/deleted), it becomes easier to look in the orphaned directory, get the inode number of the parent directory, and at least link it back in, even if you don't know the full name that it used to go by.

            Again, in these days of more robust filesystems, this type of repair is less likely to be needed.

            I know I'm talking like a dinosaur here, but you have to remember that when this was invented, the UNIX kernel had to fit in under 56KB of memory, and a similar restriction existed for individual processes. And changing it now would break things, even if you did as you suggest and changed the path resolution code in the filesystem handling routines.

            To me, your arguments sound pretty petty. It's not a huge cost, and if you don't want to use it, you can happily ignore it.

            1. ldo Silver badge

              Re: “..” is vital

              > you would have to keep a record of the full path to their current directory in all processes

              Surely any reasonable OS already does that. Look at the “dentry” object in the Linux kernel, for example, and you see it has to have a pointer back to its parent.

              Those “.” and “..” entries are basically nuisances. They get in the way of directory-traversal routines, which have to put in special cases to ignore them. Nobody wants to look at them.

              1. Jou (Mxyzptlk) Silver badge

                Re: “..” is vital

                > Those “.” and “..” entries are basically nuisances.

                I beg to differ, strongly. They are part of a consistency check. No matter what, you get filesystem errors at some point, and you need to detect them, ideally in operation. So if "." does not correctly match "self", or ".." does not correctly match "where I came from", you know something is wrong. Modern filesystems have additional hints and tricks everywhere to avoid treating the middle of a .JPG as directory data just because the upper directory or a ".." pointed there, and so on.

                We've come a long way, with a lot of lessons learned over several decades, on how to make robust data structures for a filesystem that are at the same time flexible enough.

                1. ldo Silver badge

                  Re: They are a part of consistency check.

                  There are other, more fruitful things to do in a consistency check. Look at the source code of the relevant fsck utilities to see what I mean. You know how to look at source code, don’t you?

  12. Howard Sway Silver badge

    What a start to the week..........

    It's Monday, and what do I find here but tales of confusion and woe about inodes and tales of confusion and woe about symlinks, both of which have unearthed memories of stress inducing problems that I really wish had stayed buried forever. Can't we just ease into the week with silly arguments about which distro and/or desktop is best instead?

    1. ITS Retired
      Childcatcher

      Re: What a start to the week..........

      OK. Linux Mint, where the mouse doesn't wake up the computer. Each "new" kernel takes away that needed option.

  13. Joe Gurman

    I think this is the first time....

    ....that I'm pretty certain I know the who and where of a Who, Me? story.

    1. Jou (Mxyzptlk) Silver badge

      Re: I think this is the first time....

      Who: Many.

      Where: Lots of NASA/ESA places.

      When: Ever since NASA/ESA existed. Though the "2.5 petabyte" hints at "after Y2K".

      I doubt such a thing only happens once.

  14. Bebu
    Windows

    dramatic foreshadowing

    in this case closer to a spoiler, as the inevitable pratfall was lacking only the fatal flaw. A symlink - who'd a thunk it?

    Users fusermount-ing dodgy file systems where they oughtn't has also caused a few tears before teatime.

    Bloody users - they should be banned, or at least the "clever" ones. The dumb ones aren't clever enough, and the smart ones are smart enough not to be clever, or at least to ask beforehand.
