Make Linux safer… or die trying

Some Linux veterans are irritated by some of the new tech: Snap, Flatpak, Btrfs, ZFS, and so forth. Doesn't the old stuff work? Well, yes, it does – but not well enough. Why is Canonical pushing Snap so hard? Does Red Hat really need all these different versions of Fedora? Why are some distros experimenting with ZFS if its …

  1. elsergiovolador Silver badge

    Rhythm is a Dancer

    I'd accept snap if it played a few bars from Rhythm is a Dancer each time a package gets updated or installed.

    1. ske1fr
      Trollface

      Re: Rhythm is a Dancer

      I'd settle for a fingersnap!

      1. The Oncoming Scorn Silver badge
        Thumb Up

        Re: Rhythm is a Dancer

        How about two finger snaps, on Wednesday (Icon).

        1. Anonymous Coward
          Anonymous Coward

          Re: Rhythm is a Dancer

          .. which gives you access to a place that would in principle be a great place for a DC :)

    2. LionelB Silver badge

      Re: Rhythm is a Dancer

      No can do - it ain't got the Power.

    3. Hans Neeson-Bumpsadese Silver badge

      Re: Rhythm is a Dancer

      Presumably it would also play a few bars of "I've Got The Power" every time you did a 'sudo' or 'su root'

    4. JimboSmith Silver badge

      Re: Rhythm is a Dancer

      That article was worrying, NT is years old, which makes me feel ancient.

    5. Dave559 Silver badge

      Re: Rhythm is a Dancer

      Yeah, but it's serious as cancer [1], which is probably a good reason to avoid it…

      [1] I mean, just look at all those metastasising mount points…

      1. Youngone Silver badge

        Re: Rhythm is a Dancer

        Oh gods, all the mount points. So bloody many mount points.

    6. jake Silver badge

      Re: Rhythm is a Dancer

      "I'd accept snap if it played a few bars"

      So write a start-up script and make it so.

      ::shrugs::

    7. JoeCool Bronze badge

      Re: Rhythm is a Dancer

      Too early for a fork ?

  2. fg_swe Bronze badge

    Technology & Economics

    1.) The Linux kernel can be stripped much smaller than the WNT kernel, as the latter has graphics, font rendering and several networking stacks baked-in. Mind you: a single kernel exploit is Game Over.

    2.) Windows has automated a limited number of tasks with user-friendly GUIs. As soon as you need advanced things, or want to automate mass operations, it is the same effort as the Linux command line. Capable Windows Admins are not cheaper than Linux Admins, as they all must be semi-programmers using bash, perl, python or PowerShell.

    3.) The men running AWS or Google Cloud must be true experts, their economy comes from the scale of their operations.

    4.) If you want to see the future of OSs, look at minimalist microkernels:

    https://sel4.systems/

    http://sappeur.ddnss.de/L4gegenueberLinux.html

    (Maybe it is not fair to compare L4 to Linux at the moment, but in the future it could be like the picture)

    https://github.com/AmbiML/sparrow-manifest

    Like a warship, seL4 can take hits in base modules, and still be overall secure.

    1. theOtherJT Silver badge

      Re: Technology & Economics

      4.) If you want to see the future of OSs, look at minimalist microkernels:

      I wouldn't bet on it. That's the exact reasoning people gave back in the 90s when saying "Linux will never catch on" and yet here we are. We all should have learned by now that the future belongs to the most successful path not the technically most efficient / reliable / advanced one, and success is just as often determined by what is easy as what is good.

      1. fg_swe Bronze badge

        Well

        It is good to know how to systemically fix the challenges of Big Kernels.

      2. AndrueC Silver badge
        Happy

        Re: Technology & Economics

        We all should have learned by now that the future belongs to the most successful path not the technically most efficient / reliable / advanced one, and success is just as often determined by what is easy as what is good.

        As demonstrated by Windows ;)

        1. Anonymous Coward
          Anonymous Coward

          Re: Technology & Economics

          Not quite. Windows demonstrates that something shoddy that just about works can become top dog if you're prepared to bribe, bully and blackmail everyone in your way to get there.

          In that context it is a testament to the hardcore robustness of Linux, BSD and the supporting communities that despite that it still managed to steal Microsoft's lunch in a very substantial way.

          1. Snake Silver badge

            Re: "substantial"

            Really? A 2.5% desktop market penetration is "substantial" in your I-hate-MS eyes??

            MS really was never a major player for the server market, no matter how much propaganda and pushing they tried on the topic. Even in its heyday MS servers, in absolute numbers, were never a global force - Big Iron played the big role, and Big Iron didn't run MS.

            Whilst MS did indeed do some shady business deals to ensure desktop dominance, proof that this was unnecessary paranoia on MS's part is the fact that MS was legally penalized for their actions, the deals rolled back...but look, MS Windows' desktop dominance is still here, decades later. I'm sick of telling this to Linux-heads because they're too thick to listen, but Windows' dominance has VERY little to do with "bullying", "bribes", "blackmail" and everything else Linux fans want to play victims of - Windows remains the #1 desktop OS because it's the apps, stupid.

            People don't use OSes, they use APPS. The OS is just the enabler. Just as the Mac dominated the creative desktop for decades because of applications that could leverage innate hardware & OS benefits, such as integrated color space management and accurate sound support, Windows dominates the work desktop because it's the apps. When Linux can run Bloomberg Terminal, integrated industry business solutions such as POS, Quickbooks desktop, the Adobe suite, Avid Pro Tools and other major players in their respective industries...without playing silly games with VMs, questionable compatibility, and possible driver issues - THEN FOSS will be the major, dominant force on the desktop.

            Until then, FOSS penetration on the general desktop will always, always, remain minor.

            1. Anonymous Coward
              Anonymous Coward

              Re: "substantial"

              The problem for Microsoft is that any honest TCO study (read: one that includes wasted staff time due to insane volumes of patches and abysmal UI design as well as the cost of security risks) would no longer recommend a Windows desktop.

              It's a good thing for Microsoft that Apple would not be able to handle the volume of new systems that would need to be bought.

              1. mdubash

                Re: "substantial"

                Windows would still dominate because you don't need to train ordinary users how to do the basics: they've trained themselves on their own time.

                1. Anonymous Coward
                  Anonymous Coward

                  Re: "substantial"

                  On a Mac they can do that too, with one major difference: they won't need to do that again and again every time the OS changes. The UX on Apple platforms is still one of the best, and has remained pretty consistent over the years. Observe the current retraining hassle for companies having Win 11 rammed down their throats to see why that matters.

                  A decent UI saves a lot on the most expensive cost of a business: man hours. Apparently, Microsoft doesn't seem to care about that.

            2. Anonymous Coward
              Anonymous Coward

              Re: "substantial"

              I was actually referring to the server side of things. When it comes to being a server, it demonstrates very well that it's just a gaming OS that needs an awful lot of resources to be half as stable and secure as an out of the box Unix variant (any of them) is by default.

              I have seen the desperate attempts to make it scale where, for instance, utterly misguided attempts at government level were made to switch to MS Windows and services, and it inevitably ended up in disaster because it simply cannot handle it. It's either awfully slow or unstable, or sometimes both. In any event, it's always a LOT more expensive, far less bang for the buck, but hey, it's only the taxpayer's money and you get nice expensive dinners from Microsoft, as well as the many blood-sucking consultancies you have to feed to keep the idea alive so as not to be found out.

              There's a reason why the really big setups that crunch data avoid Windows like the plague.

              It's because it is.

            3. ovation1357

              Re: "substantial"

              "deals rolled back"

              When you, for example, embed the world's worst and least standards-compliant web browser into your OS and call it "The Internet" in an attempt to obliterate the competition, then get legally penalized for doing it - how exactly do you roll that 'deal' back?

            4. georgezilla Silver badge

              Re: "substantial"

              " ... Really? A 2.5% desktop market penetration is "substantial" ... "

              So running the vast majority of EVERYTHING else isn't "substantial"?

              Just what planet do you live on?

              Because on this one, Linux IS the dominant OS.

              That's called reality. You should try living in it.

      3. jgarbo

        Re: Technology & Economics

        Yep. Evolution 101. Success in the next generation means "just good enough" not "superbly brilliant" (as devs would like); viz. Windows... ;-(

        1. Plest Silver badge

          Re: Technology & Economics

          "Survival of the fittest." annoys me as everyone misses one very important thing from that phrase, and it's the addition of "...for purpose.".

          Not strongest, not best, not biggest not even smartest, simply the best fit for purpose. And as it is in the animal kingdom, it is in IT, we get adequate for the job and that's all it will ever be.

      4. Ozan

        Re: Technology & Economics

        I have been hearing that microkernels are the future for 20 years now, and 20 years ago I read that they had been the future for 20 years before that. The future is an OS that lets you do what you need to do. It does not matter whether the kernel is micro or monolithic.

      5. RobLang

        Re: Technology & Economics

        Can confirm: my parents were the Betamax King and Queen because my engineer Dad knew it was the better format. The little video rental shop round the corner had about 4 Betamax tapes, and one of them was a ballet.

    2. Anonymous Coward
      Anonymous Coward

      Re: Technology & Economics

      3.) The men running AWS or Google Cloud must be true experts, their economy comes from the scale of their operations.

      That's rather misogynist!

      4.) If you want to see the future of OSs, look at minimalist microkernels:

      HURD has entered the chat...

      1. fg_swe Bronze badge

        Men

        I was told that "man" and "men" have always been used in the sense of "Mensch" in German. If that offends COMINTERN tools, even better.

        1. LionelB Silver badge

          Re: Men

          Wouldn't that translate to "the humans... "? Which in the current context may be debatable - or perhaps the Yiddish "mensch" arguably even more so.

          1. EarthDog

            Re: Men

            +1 human

            1. jgard

              Re: Men

              I'm currently contracting for a very large financial institution. They avoid the delicate problem around man/woman usage in their technical specs by referring to users as 'human entities'. Seriously.

              However, it does raise the exciting possibility that some users are entities of a non-human persuasion. Not seen any yet though.

              1. EarthDog

                Re: Men

                what happened to just user? "Human" also works. Aliens could be human as well, just not terran.

        2. jake Silver badge

          Re: Men

          Have you been told that ELReg is a RedTop, and that the commentardariat is full of trolls of one persuasion or another?

          Are you aware that some trolls exist to troll other trolls?

          1. LionelB Silver badge

            Re: Men

            That can get pretty circular too: to quote the late great Jim Lahey: "Shit moves in circles. 360 degrees shit circles. When the shit comes 'round again, I'll be ready, Julian."

          2. TimMaher Silver badge
            Coat

            Re: Trolls

            In fact, there are so many trolls that you need a trolley to pick them up and shove them into the car park.

        3. sebacoustic
          FAIL

          Re: Men

          That's pretty misleading... yes the word "man" means something like "someone" or "anyone" and it's used a lot, and regardless of gender.

          But it's not the word for a male: that's "Mann" in German.

          The words sound and look similar, but they don't even share etymological roots.

          1. sebacoustic

            Re: Men

            That's pretty misleading: yes, "man" means something like "anyone" or "someone" in German, regardless of gender, but it's not the word for a man (i.e. a male), which is "Mann".

            The words sound similar but they are not the same, they don't even share etymological roots.

            Nicht zu sehen hier, bitte gehen Sie weiter. ("Nothing to see here, please move along.")

        4. John Brown (no body) Silver badge

          Re: Men

          I remember a phrase from about 40-50 years ago[*]. Women are Men with a bit extra in front :-)

          So, "women" covers approx 50% of the population whereas "men" includes 100% of the population.

          * No idea where I heard it, might have been a comedy sketch or anything.

          1. Ignazio

            Re: Men

            BS. People, engineers, or any other more specific word to describe what these people do would have been the right choice. Climbing plate glass to try and pretend "men" wasn't just a 1952 stance a few decades past its expiry date is just passé.

            1. John Brown (no body) Silver badge

              Re: Men

              Oh dear. Did I forget the joke icon? Wo + men = Women was the point of the joke.

      2. Anonymous Coward
        Anonymous Coward

        @AC - Re: Technology & Economics

        Pardon us. From now on, we'll use comrades instead. Like in "the comrades running AWS or Google".

        Satisfied ?

        1. Anonymous Coward
          Anonymous Coward

          Re: @AC - Technology & Economics

          Could just use the word "people". Unless you're paying peanuts.

          1. jake Silver badge

            Re: @AC - Technology & Economics

            Except "man" (from the Old English Mann) means all humans, men, women and children.

            The word "wer" means the adult male human (as in "werewolf" ... sorry kids, no such thing as werewomen, not in that context anyway!) ... and as a side-note, werguild is not the same thing as a Eunich, the plural of which is not UNIX, just to bring us back on topic.

            1. Tim99 Silver badge

              Re: @AC - Technology & Economics

              Women were wyfmen or wommen. Wyf originally meant adult female. A housewyf was a woman who ran a house (often as the "wife" of the owner). It also became a common word for female servant. Much of that changed after the Norman invasion, when women generally had fewer intrinsic rights to property. As an aside, "world" is likely to have been gendered. It comes from the root weorold/werold and meant the "affairs of men" - possibly because things outside the household were the business of men (gendered), whilst those of the wyfman were within the household.

              1. fg_swe Bronze badge

                Wife

                Also see "Weib" in German, which is very close to Wife. For some reason "Weib" is now derogatory here (Suebi land), but that is a very recent development. So we now use "Frau".

                "weiblich" still means "female" and is not derogatory. Well, maybe already on the left coast, they apparently want to cancel Mutter/Mother.

                I do think we use too many Latin terms already (in English much worse than in German) and that is why I prefer "men" over "human".

              2. EarthDog

                Re: @AC - Technology & Economics

                In German "welt" vs. "umwelt", in English "The wide world" vs. the world as in the planet.

            2. Evil Auditor Silver badge
              Thumb Up

              Re: @AC - Technology & Economics

              Well, I'll do my best to divert off topic again regarding "wyfwolfs" (werewomen would be, maybe, someone on the gender-fluid spectrum?). It reminds me of a rather popular fantasy film from around 2001 where a human woman asked whether there were no dwarf-women. To which the answer was: of course there are, but you can't distinguish them from the bearded dwarf-men. I assume it's similar with wyfwolfs.

          2. LionelB Silver badge

            Re: @AC - Technology & Economics

            > Unless you're paying peanuts.

            Then it would have to be "monkeys", surely? I doubt you want to go there.

        2. Bebu Silver badge

          Re: @AC - Technology & Economics

          "...we'll use comrades instead. Like in "the comrades running AWS or Google".

          The people collective of AWS or Google? Pigs might fly. I think serf or slave of these respective fiefdoms would be closer to the truth.

          I suspect I wouldn't be the first to imagine that the 13th Amendment (+14, 15) didn't abolish slavery, just industrialized it.

          1. Claptrap314 Silver badge

            Re: @AC - Technology & Economics

            You clearly have never seen a paycheck from one of these places.

            Don't confuse the Amazon warehouse with the AWS data barn. COMPLETELY different worlds.

      3. An_Old_Dog Silver badge

        The HURD is lost

        HURD is a project which has been marching through Purgatory since it began.

        "Current Status: The latest releases are GNU Hurd 0.9, GNU Mach 1.8, GNU MIG 1.8, 2016-12-18"

        It looks interesting, but given its lack of critical developer mass, it's not something I'd want to use as my main OS.

        1. jake Silver badge

          Re: The HURD is lost

          There is nothing inherently wrong with HURD, in fact I quite like it and have contributed in the past.

          But the BSDs got the jump on it. I like the BSDs, too, and also contribute there (and have been since before it was BSD).

          And of course Linux got the jump on the BSDs. Fortunately for me, I also like Linux. And so I contribute.

          The Windows part of the equation is more about marketing and sales than technology, alas. Fortunately, one doesn't have to contribute to every project.

      4. jgarbo

        Re: Technology & Economics

        Not misogynist if a fact. You want misogyny? "Thankfully no useless women, only brilliant men running ...."

    3. Roland6 Silver badge

      Re: Technology & Economics

      >"http://sappeur.ddnss.de/L4gegenueberLinux.html

      (Maybe it is not fair to compare L4 to Linux at the moment, but in the future it could be like the picture)"

      Yes seL4 will look more like the top picture once it is packaged into a distribution.

      A more helpful comparison would be to Mach and XNU.

  3. ske1fr

    The companies that bought them – and it was a big-business level of expenditure – could afford to pay for highly trained specialist staff to tend and nurture those machines.

    Or entities, shall we say, could use their existing staff with an aptitude, hence I got to feed a Siemens Nixdorf upright freezer-sized box with Sony tapes, reset lusers' passwords and stuff in a Framed-Access Command Environment front end, and gradually learn some more stuff like vi and cpio for restoring lusers' oops-I-deleted-this-file-can-you-restore-it-from-backup cockups. NT servers were prettier (but carrying too many DLTs up and down stairs can really knacker your thumbs, kids), but no form of Windows ever really floated my boat. And then I saw Knoppix, and saw what a POS XP was, and that was that.

    1. Bebu Silver badge

      FMLI?

      "Framed-Access Command Environment"

      I had forgotten FMLI and all that SysV? stuff.

      Most of the Unixes had text sysadmin user interfaces (TUIs) - SAM on HP-UX(?), AIX, etc. - of various degrees of ghastliness. Not that more recent efforts are much better.

      I recently created a VM of the latest OpenIndiana release just out of morbid curiosity (or nostalgia :)

      If I am absolutely honest, it really is a better real-world proposition for a solid, coherent OS than most/all Linux distros, and I would also say it would still have an edge over the BSDs. Absolute heresy, I know. Not any risk this side of doomsday of any Solaris derivative getting any traction given its ultimate ownership.

      1. Alistair
        Windows

        Re: FMLI?

        Dear God in Heaven.

        I have to throw a dead fish at you for mentioning SAM. That disaster was one of the reasons I learned perl.

      2. coredump

        Re: FMLI?

        Hah, SMIT / smitty on AIX brings back ... memories. IIRC I used some F? key with smitty for a bit to figure out what it was really doing behind the scenes, and script it elsewhere. That was a nice feature, at least. But I mostly had a meh-hate relationship overall with AIX.

        Didn't love HP-UX either, SAM didn't help.

  4. John H Woods Silver badge
    Headmaster

    Pedantic note

    ZFS isn't really "new tech", it's over 15 years old. Btrfs is only slightly younger, but vastly more exciting as you never know whether your data is really safe or not. /ZFS fanboy

    1. Anonymous Coward
      Anonymous Coward

      Re: Pedantic note

      No data is "safe" unless you have backups.

      Untested backups are not backups.

      1. John H Woods Silver badge

        Re: Pedantic note

        Sure; ZFS is more about availability than recoverability. Although some parts of it do enhance the latter (e.g. the ability to send snapshots between non-colocated servers), data safety is, as you say, all about tested (off-site) backups.

        Being a ZFS aficionado doesn't mean I don't respect the 3-2-1 rule...
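
        For anyone curious, the snapshot-shipping bit really is just a couple of commands; a minimal sketch, with made-up pool, dataset and host names, assuming ssh access to the receiving box:

          # one-off full replication (all names here are hypothetical)
          zfs snapshot tank/data@2022-04-01
          zfs send tank/data@2022-04-01 | ssh backupbox zfs receive backup/data

          # thereafter, ship only the changes since the previous snapshot
          zfs snapshot tank/data@2022-04-02
          zfs send -i tank/data@2022-04-01 tank/data@2022-04-02 | ssh backupbox zfs receive -F backup/data

        Run from cron, that plus a genuinely off-site copy covers most of the 3-2-1 rule.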

      2. This post has been deleted by its author

    2. b0llchit Silver badge
      Devil

      Re: Pedantic note

      Well, ZFS fails miserably, just like btrfs, when flames destroy the drives. This will always end in tears.

      Most other filesystems are less robust and, as a consequence, the sysadmin(s) will take regular backups and simply restore after the burn.

      /me, the sysadmin with some experience restoring all those carbonized drives

      1. katrinab Silver badge

        Re: Pedantic note

        The difference with zfs is that if your backup completes without errors, you know[*] you have a perfect copy of the original data on it.

        That is not the case with other file-systems.

        [*] T&C apply, Only applies if your backup software actually attempts to copy all the data, etc.

        However, you can be absolutely sure your backup software won't have written any wrong data due to filesystem corruption.

        1. the spectacularly refined chap

          Re: Pedantic note

          However, you can be absolutely sure your backup software won't have written any wrong data due to filesystem corruption.

          Not really, ZFS is a large, complex chunk of code and has had and will have bugs that render an entire pool bricked. Sure it'll pick up media errors but it can't protect against those of its own making. It doesn't matter how many snapshots you have to roll back to when they are all unusable.

          Not a theoretical concern, it has happened many times and will happen again. Used to its strengths, yes it gives several additional layers of protection, but blind faith in its capabilities is asking for trouble.

        2. Alistair
          Windows

          Re: Pedantic note

          I detect someone with extensive experience with NotBackUp.

    3. JoeCool Bronze badge

      Re: Pedantic note

      I think "new" might be referencing ZFS's presence in the kernel.

  5. Anonymous Coward
    Anonymous Coward

    Snap is terrible.

    Slow to install, slow to start, proprietary backend, and it also breaks installs due to a fundamental limitation in how it handles home folders.

    It is a problem looking for another problem to make even worse.

    1. LionelB Silver badge

      Yeah - the home folder thing is a show-stopper as far as I'm concerned.

    2. herman
      FAIL

      Snap rhymes with crap

      Hmm, snap is always on my uninstall list.

  6. Anonymous Coward
    Anonymous Coward

    "making them able to fetch and install their own updates"

    I don't quite understand this. You wouldn't let Windows do this, so why would you want Linux to? On my home machine I run Enterprise for this exact reason. On my server I run Debian and choose when and how I update. In both environments I think it's always best to check said updates beforehand. Like Windows, there can be many moving parts to an OS, and while an update may resolve one problem there is always the worry it's going to create another.

    1. YetAnotherXyzzy

      I agree with you... on my own boxes. On my technophobic wife's box, the alternatives are:

      1. Try to teach her to do as I do. Ha ha, that's not going to work.

      2. Tell her to always blindly accept the "updates are available" prompt. Which she rarely notices, so security patches go unapplied.

      3. Set up autoupdates for her.

      What you describe is the gold standard, but not all computers are administered by folks who agree. Let's autoupdate those boxes, without taking away the ability for you and me to choose.

      1. jake Silver badge

        Forgot one.

        4) When updates are available (cron is your friend), update the computer that runs the same software as [Wife, DearOldMum, GreatAunt, siblings, sprog, etc.]'s boxen. Check that the update(s) work as advertised on the one local box. Then reach across the network and tell their computer(s) to perform said upgrade. Script most of the operation.

        Works for me and mine. The version of Slackware that I built specifically for this makes it almost laughably easy.
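
        For anyone wanting to borrow the idea, here is a minimal sketch of that workflow in Python. The hostnames and the update command are placeholders (jake's setup would presumably use slackpkg rather than apt-get), and it assumes key-based ssh plus an update command that can run unattended:

          #!/usr/bin/env python3
          # Sketch of "test the update locally first, then push the same
          # update to the family boxen over the network". Hostnames and
          # UPDATE_CMD are placeholders; swap in slackpkg, dnf or whatever
          # the remote machines actually run.
          import subprocess
          import sys

          REMOTE_HOSTS = ["mum.lan", "auntie.lan"]   # hypothetical names
          UPDATE_CMD = "sudo apt-get update && sudo apt-get -y upgrade"

          def run(cmd, host=None):
              """Run cmd locally, or on host via ssh; return the exit code."""
              argv = ["ssh", host, cmd] if host else ["sh", "-c", cmd]
              return subprocess.run(argv).returncode

          # 1. Update the local reference box first (cron is your friend).
          if run(UPDATE_CMD) != 0:
              sys.exit("Local test update failed; leaving the remote boxen alone.")

          # 2. Only if that worked, reach across the network and update the rest.
          for host in REMOTE_HOSTS:
              if run(UPDATE_CMD, host) != 0:
                  print(f"Update failed on {host}; check it by hand.", file=sys.stderr)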

        1. Gorbachov

          Re: Forgot one.

          "The version of Slackware that I built specifically for this"

          is orthogonal to

          "makes it almost laughably easy."

  7. mmccul

    The problem is desktop components on servers

    The recent trend in Linux, in my experience, is that an OS in theory aimed at servers comes with so many mobile/end-user system components, some of them harder to strip out than ever before, that I feel like I'm running a laptop, not a server. I've done the exercise many times - sit down and either justify every installed package or remove it - on a few Linux distributions, and I often end up stripping at least fifteen daemons, some of them network-related, that I couldn't justify ever existing on a server. (Yes, said systems ran in production for years in various functions without needing said packages re-installed.)

    More recent trends in Linux only accelerate this tendency to treat the entire OS as a laptop, to the point that I've argued the people making the decisions for some Linux distributions are only using it on their personal laptop and think no one ever uses the OS on a server.
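
    For anyone repeating that audit, two reasonably distribution-neutral starting points (assuming a systemd-based box with iproute2 installed; adjust to taste) before reaching for the package manager:

      systemctl list-unit-files --state=enabled   # everything wired up to start at boot
      ss -tlnp                                    # every daemon actually listening on the network

    Anything in either list that you can't justify is a candidate for removal.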

    1. Arbuthnot the Magnificent

      Re: The problem is desktop components on servers

      "...I've argued the people making the decisions for some Linux distributions are only using it on their personal laptop"

      You don't have to skirt around it, you can just say "Poettering", I don't think it summons him...

      1. katrinab Silver badge
        Thumb Up

        Re: The problem is desktop components on servers

        It is absolutely true that on my hardware, Debian boots in 1 second, FreeBSD boots in 10 seconds, not including POST time.

        But while 1 second is a significantly better number than 10 seconds, it doesn't really factor into which operating system I choose to deploy on my server. If it were 20 minutes, like Windows Server 2003 on period-appropriate hardware back when it was current, then I might be a bit more interested.

        1. FatGerman

          Re: The problem is desktop components on servers

          >> But while 1 second is a significantly better number than 10 seconds

          Better? Smaller maybe, but does it matter? What are you going to do with those 9 seconds you saved?

          I don't actually understand the hate for systemd, it seems to work fine here and I find unit files a damn sight easier to write than the old init scripts. But I do agree that it does seem like a solution to a problem that didn't really exist.

          1. Anonymous Coward
            Anonymous Coward

            Re: The problem is desktop components on servers

            > I don't actually understand the hate for systemd, ... But I do agree that it does seem like a solution to a problem that didn't really exist.

            That alone is enough.

            The ongoing (and equally un-asked-for) feature creep is compounding the initial mistake.

        2. Steve Davies 3 Silver badge

          Re: Server boot time

          How much of the year is spent rebooting a Linux Server... Come on now, what percentage is it? 0.000001% perhaps?

          Unless it has crashed at a peak time, 10 seconds or 1 minute does not make a ha'porth of difference in the grand scheme of things.

          That's why I've never really understood this fascination with boot times for servers.

          I just want it to boot properly every time I need it to.

          My own website is approaching 300 days of uptime. I will swap it over to the backup server over Easter just so that I can take the box apart and remove the accumulated dust and detritus from inside.

          Barring a hardware failure (which is why the database is backed up to the backup server 4 times a day) it will function very well for years.

          The only time an out of band reboot is done is when there is a zero day kernel vulnerability that needs to be patched.

          Properly configured servers run for months and months. But none of those beat the old VMS Cluster that I used to run. The cluster had not failed in more than 15 years. Nodes could come and go from the cluster with ease. All part of the design that was introduced in 1983. It is a pity that Linux has never really got that sort of thing going.

          1. Arbuthnot the Magnificent

            Re: Server boot time

            I work on an HPC cluster with hundreds of modern high-spec servers, running bare-metal CentOS. Total boot time is probably 30 or 40 seconds, but it's irrelevant because they take 10 minutes to flipping POST anyway!

      2. An_Old_Dog Silver badge
        Joke

        Re: The problem is desktop components on servers

        Oh, but it does summon him. The problem is we don't know the magic phrase with which to BIND him.

      3. Ian Mason

        Re: The problem is desktop components on servers

        You don't have to skirt around it, you can just say "Poettering", I don't think it summons him...

        And even if it does, we'll just give him a good kicking...

      4. Anonymous Coward
        Anonymous Coward

        Re: The problem is desktop components on servers

        Only if you say it 3 times.

        Though you sometimes do get his disciples even after once.

        1. Bebu Silver badge

          Re: The problem is desktop components on servers

          "And even if it does, we'll just give him a good kicking..."

          《Though you sometimes do get his disciples even after once.》

          So if you don't get the organ grinder, the above remedy can be satisfactorily applied to the monkey.

          Actually I don't particularly mind systemd - it's a totally insane reproduction of some of the nearly forgotten migraines from Solaris - but the bits I use work well enough. Passing ambient capabilities (e.g. CAP_NET_BIND_SERVICE) to processes when dropping privilege (which saves fiddling with file system capabilities and permissions), along with chroot()ing and a private /tmp, makes a sysadmin's life a bit easier.
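
          For anyone who hasn't played with those directives, a minimal unit-file sketch; the service name, user and paths are all made up, and the binary has to exist inside the chroot (AmbientCapabilities= also needs a reasonably recent systemd, v229 or thereabouts):

            # Hypothetical fragment of /etc/systemd/system/webapp.service:
            # run as an unprivileged user, keep only the bind-to-low-ports
            # capability, chroot into /srv/webapp-root, give it a private /tmp.
            [Service]
            User=webapp
            AmbientCapabilities=CAP_NET_BIND_SERVICE
            RootDirectory=/srv/webapp-root
            PrivateTmp=yes
            ExecStart=/usr/bin/webapp --listen 443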

          1. Anonymous Coward
            Anonymous Coward

            Re: The problem is desktop components on servers

            > [systemD is] a totally insane reproduction of some of the nearly forgotten migraines from Solaris

            I assume you're referring to SMF. The same bit which (circa Solaris 10 iirc) would sometimes present us with a login prompt, even though ypbind hadn't done its business yet, resulting in multiple failed logins until it got sorted. Probably other dependency tree or race condition foolishness I've forgotten or blocked as painful memories.

            So yes, systemD and SMF both have some broken implementations. No idea if SMF has continued creeping featurism in Solaris as systemD has done to Linux.

            > but the bits I use work well enough.

            And here we part company. I've seen systemD trip over itself enough times, waiting to shut down with some pointless timer countdown because systemD's dependency tree has rotted itself, sometimes for literally minutes.

            Perhaps in the (vain, failed) pursuit of "fast startup" they managed to bungle shutdown on the other end. One is as important as the other.

            Presumably it all worked flawlessly on Lennart's laptop ....

            1. Joe W Silver badge
              Pint

              Re: The problem is desktop components on servers

              ypbind.... or yp$stuff in general.... *shudders* you had to remind me.

              need a drink ---->

              1. Anonymous Coward
                Anonymous Coward

                Re: The problem is desktop components on servers

                It was admittedly long ago. Approximately decades.

                Say what you will about NIS, and I've probably said quite a bit myself over the years, at least it was pretty simple. Too bad snatching password hashes out of it was also pretty simple.

                If you want to talk about needing a drink, that'd be more like LDAP. Especially if sssd is in the picture. It works, but I never found it to be simple.

          2. Down not across

            Re: The problem is desktop components on servers

            Actually I don't particularly mind systemd - it's a totally insane reproduction of some of the nearly forgotten migraines from Solaris - but the bits I use work well enough.

            The difference is that SMF in Solaris actually works, and quite well (it had some niggles in its early days), and it doesn't attempt to embed itself everywhere and spread like the cancer that systemd is. Solaris is also still quite happy with a normal init script if you'd rather not write a manifest for SMF.

      5. Tim99 Silver badge

        Re: The problem is desktop components on servers

        I'm old and possibly senile, and like many older contributors, mostly used *NIX on expensive servers, minis and specialised systems. Before NT, "Microsoft had the highest-volume AT&T Unix license": Bill Gates - Microsoft's Xenix (Wikipedia). We shipped a number of systems using it, mainly because it was reliable and ran well on cheap generic boxes.

        The Proliferation of Poettering is one reason that I suspect that the premise "it's a fact: Linux is a Unix now. In fact, arguably, today Linux is Unix... ...To get the ready-to-use version, though, you have to buy a support contract" is not necessarily desirable. Any Unix that had the obfuscation of systemd would have died "back then" as not conforming to the basic "Unix philosophy". My earned cynicism suggested to me that systemd was a cunning plan that was not in users' best interests. I wrote a "Troll" El Reg comment nearly 5 years ago: How can we make money? I still believe that it was closer to a truth than Liam might be comfortable with...

        1. John Brown (no body) Silver badge

          Re: The problem is desktop components on servers

          " I wrote a "Troll" El Reg comment nearly 5 years ago:"

          Which I just read and upvoted. Yes, we can upvote really old posts too :-)

      6. Anonymous Coward
        Anonymous Coward

        Re: The problem is desktop components on servers

        You don't have to skirt around it, you can just say "Poettering", I don't think it summons him...

        I had to laugh at that, fantastic.

      7. This post has been deleted by its author

    2. YetAnotherXyzzy

      Re: The problem is desktop components on servers

      That depends on the distro. If your preferred distro isn't giving you an installation option that doesn't leave you with all that laptop nonsense, then it's time to try another distro.

      1. jake Silver badge

        Re: The problem is desktop components on servers

        Not really. Any sysadmin worth his salt knows that a completely custom, fully targeted installation is the only way to go for a server. It doesn't really matter which distro you start with, what matters is the distro that you actually wind up using. Only an idiot installs Ubuntu, Debian, RedHat or SUSE off a kitchensinkware DVD and calls it a server.

        That said, some software distributions are easier to customize than others. I personally prefer BSD on the servers ... but Slackware works nicely, too.

    3. Anonymous Coward
      Anonymous Coward

      Re: The problem is desktop components on servers

      Wait till you realise that, unless you are very awake, the latest esl-erlang does a full install of GNOME on your freshly provisioned server.

      1. jake Silver badge

        Re: The problem is desktop components on servers

        At least nobody in their right mind would put esl-erlang on a production server, very awake or otherwise.

        1. Anonymous Coward
          Anonymous Coward

          Re: The problem is desktop components on servers

          esl-erlang is used by rabbitmq. Most definitely a server component if ever there was one.

          I believe there is *now* a GNOMEless flavour. But that's too late for anyone who woke up to a GUI appearing overnight on their server.

  8. alain williams Silver badge

    Unix was always diverse

    Because it was open (specifications more important than code) it has always been possible to replace components. So people did. Sometimes the replacements improved things, sometimes they did not.

    So there was diversity and experimentation. In a Darwinian way the better alternatives usually** won out after several years, so Unix systems gradually evolved to use better components. The same is happening today, but without the benefit of hindsight today's diversity just looks like a mess. In a few years' time, what is considered a mess will be something different.

    ** "Usually" - large-company marketing and techie conservatism sometimes meant staying with, or adopting, less-than-best solutions.

    1. fg_swe Bronze badge

      Still Is

      MacOS, iOS, Android, FreeBSD, OpenBSD.

      If Linus goes nuts tomorrow, we will simply switch to them.

      1. Dan 55 Silver badge

        Re: Still Is

        Android is not Linux, it's the kernel with a lot of things bolted on to ensure you can't get at your own data. You know, that stuff your apps save in /data/data that is not readable by you, the device owner, and can't be backed up by adb backup as adb backup has been allowed to rot. Google's cloud transfer does work though. Fancy that.

        MacOS... well, it's allowed to call itself a UNIX™ because Apple paid money for that.

        iOS, are you kidding? What next, your Samsung TV is also a UNIX?

        1. fg_swe Bronze badge

          iOS

          Indeed iOS is a golden cage, but if you can spring it free (as some people apparently did in the past), it is very much a "little" Unix machine.

          E.g. these Unix apps:

          https://apps.apple.com/us/app/ftpmanager-ftp-sftp-client/id525959186

          https://apps.apple.com/de/app/ftp-server/id346724641

          1. Dan 55 Silver badge

            Re: iOS

            If that's all it takes to have a UNIX machine then Windows is too. As is DOS. And CP/M. And AmigaOS. And RISCOS. And the ZX Spectrum.

        2. Anonymous Coward
          Anonymous Coward

          Re: Google's cloud transfer does work though.

          Does it fuck.

          Despite every attempt to ensure my phone was backed up to Google via the accounts->backup feature (it confirmed I was all backed up), when I reset the phone and logged back into my Google account, it had no record of a backup to restore.

          That was a fucking 2 hour piece of work I had to do. And that's *with* the Samsung transfer tool I used to take a backup (because I didn't and never will trust Google).

          Reading forums, the response was "it happens. Get over it".

          Trying to sign in 20 apps with complex passwords on a phone screen is not fun.

      2. jake Silver badge

        Re: Still Is

        No. If Linus goes TITSUP[0] tomorrow, Linux will carry on.

        Look up "What if Linus gets hit by a bus?".

        [0] Torvalds Inconveniently Totally Stops User Processes

        1. Benegesserict Cumbersomberbatch Silver badge
          Coat

          Re: Still Is

          What if Linus gets hit by a bus?

          You don't have to worry about that. Torvalds intends to step under (a) plane.

    2. TVU Silver badge

      Re: Unix was always diverse

      "Unix was always diverse"

      I fully agree there, and what really did it for the commercial Unices was the huge and extortionate licence and royalty fees that came with them. As soon as the free and open source upstart Linux cousin came along, that marked the end of Unix domination, and if they were creatures, they'd have been put on the at-risk-of-extinction list by now.

      1. Doctor Syntax Silver badge

        Re: Unix was always diverse

        "what really did it for the commercial Unices was the huge and extortionate licence and royalty fees that came with them"

        Yes. If SCO had realised the possibilities of the mass market and set their prices accordingly, it's likely that neither Windows nor Linux would have got any hold on servers. There were a lot of businesses running on PC-architecture with SCO and some industry-specific application. They didn't need an in-house admin. I had a few of those under my wing, and even taken together they weren't my main customers.

        SCO's window of opportunity lay before Linux was sufficiently polished to use in production and package vendors realised it was worth porting to. They missed it and then doubled down on that with their litigation.

        1. TVU Silver badge

          Re: Unix was always diverse

          "If SCO had realised the possibilities of the mass market and set their prices accordingly its likely that neither Windows nor Linux would have got any hold on servers"

          Indeed, and they were the masters of their own misfortune there. There is a good account of those bad old days in Stephen Shankland's Fact and Fiction in the Microsoft-SCO Relationship article:

          https://www.cnet.com/tech/tech-industry/fact-and-fiction-in-the-microsoft-sco-relationship/

          I want the next Bond movie to feature a villainous corporate entity called "SCO Corporation".

        2. jake Silver badge
          Pint

          Re: Unix was always diverse

          "They missed it and then doubled down on that with their litigation."

          Lest anyone misunderstand, the SCO group which built "SCO the OS" were NOT the same shysters involved in the litigation.

          Welcome home, DrS. Have a beer.

        3. Roland6 Silver badge

          Re: Unix was always diverse

          >If SCO had realised the possibilities of the mass market and set their prices accordingly its likely that neither Windows nor Linux would have got any hold on servers.

          Or the desktop...

        4. Liam Proven (Written by Reg staff) Silver badge

          Re: Unix was always diverse

          [Author here]

          > If SCO had realised the possibilities of the mass market and set their prices accordingly

          I disagree.

          It is not about _level_ of pricing. If that argument were the case, then Coherent would have been huge. It wasn't.

          There is a vast difference between "cheap" and "free".

          A cheap OS is not cheap any more once you're spawning thousands of VMs for a spike in demand. It doesn't matter if the price is 0.01¢, you need a whole other level of infrastructure for all those licences. If something costs money and belongs to someone else, you can't build products around it, because then you have to license it and pay an external company for every box you ship.

          It's not that SCO was expensive, although it was. Other x86 Unixes weren't.

          It's that FOSS enables whole types of deployment that are just not feasible with commercial OSes and proprietary code.

          I would further argue that the type of the licence matters. The BSD licence has been a liability to the BSDs, not a win: it's enabled companies to take advantage of various BSDs without contributing anything back, and it's encouraged lots of forks which have divided the small number of programmers capable enough to make significant improvements.

          The reason certain business people hated the "cancerous" GPL is because of its infectious nature. This is, for example, why I think Oracle won't put ZFS under the GPL until there is not a single residual cent of profit to be made from it.

          This is why MS may _say_ it loves Linux, but Windows still doesn't dual-boot cleanly with it, and with the rise of UEFI and Secure Boot and TPM chips, it's getting harder and harder for an all-FOSS OS to interoperate cleanly with increasingly sealed-down PCs.

          So, no, I disagree.

          Secondly, but importantly:

          > They missed it and then doubled down on that with their litigation.

          Remember: the SCO that did all the litigation *is not the same* "SCO" that made Xenix.

          SCO № 1 was the Santa Cruz Organization.

          That SCO might have temporarily sold more copies, but probably made less profit.

          But it nearly went broke as it was, and was bought, along with DR and others, by Novell offshoot Caldera. Caldera _was_ OK and made a good Linux distro.

          Caldera then renamed itself "The SCO Group".

          (It also spun off DR-DOS as Lineo, later Devicelogics, and Unixware plus SCO UNIX as what's now Xinuos.)

          The SCO Group != the Santa Cruz Organization. Different companies.

          S.C.O. good.

          SCO Group bad.

          1. Roland6 Silver badge

            Re: Unix was always diverse

            >A cheap OS is not cheap any more once you're spawning thousands of VMs for a spike in demand.

            Spawning thousands of VMs isn't without cost; plus, as we learnt decades back from AT&T, circa 50% of the service cost was for the system necessary to generate the bill...

            >It's that FOSS enables whole types of deployment that are just not feasible with commercial OSes and proprietary code.

            I think you will need to evidence that.

            >I would further argue that the type of the licence matters.

            Agree, although much depends on the way the product is locked down. Compare Windows before activation, when an activation/licence key wasn't necessary, with Windows post-activation, where a licence key (and more, as per all versions since XP) was necessary. This also contributes to your claim above.

            I didn't have to install SCO Unix in the 1980s so don't know how it was activated, but for an early 1990s project it was a headache we could have done without, having to track licence/activation keys for the various SCO products we were using for a large-scale deployment.

            >SCO № 1 was the Santa Cruz Organization.

            SCO No. 1 was the Santa Cruz Operation.

      2. jake Silver badge

        Re: Unix was always diverse

        "As soon as the free and open source upstart Linux cousin came along"

        Well, to be fair NET/2 (based on 4.3BSD) was released in June of '91, a couple months before Linux.

        Mark Williams Group's Coherent (1980 on PDP-11, '83 on PC clones) had already brought the cost down to where mere mortals could afford to run a good *nix OS.

        Throw Minix into the mix (1987), and it becomes clear that a free UNIX was pretty much inevitable.

        Interesting times, especially for those of us who were *nix agnostic and wanted to run it at home.

        1. Down not across
          Pint

          Re: Unix was always diverse

          +1 for Coherent. It was dirt cheap. So I bought a copy and was shocked how good it was. -->

          Ended up using it as a kind of dev/test environment for some projects that ultimately were destined for Ultrix on DEC and Convergent Mini/MegaFrame.

      3. jake Silver badge

        Re: Unix was always diverse

        "what really did it for the commercial Unices was the huge and extortionate licence and royalty fees that came with them."

        Mark Williams Coherent was about a hundred bucks per seat. It worked nicely and was FAST (being written in assembler), had no AT&T code in it, and would have done better if they had added better networking earlier. Lost opportunity ... but most people didn't realize where networking was going in the early '80s.

        1. Roland6 Silver badge

          Re: Unix was always diverse

          Whilst I tend to agree, I think we underestimate the role the "Radio Shack" hobbyist mindset had. CP/M and PC-DOS/MS-DOS were very simple and came on a couple of floppy disks, and the manual wasn't too much for hobbyists to find their way around. I think Steve Jobs did a good job with the Mac in creating a box with a (by the standards of the time) powerful desktop operating system that was easy for hobbyists to find their way around.

          1980s Unix in whatever form wasn't really an OS for the uninitiated.

          Personally, I would have liked to have seen DEC use the opportunity they had and price-match a desktop Vax with VMS pre-installed against a PC with MS-DOS; that would probably have stopped the rise of Microsoft...

          1. Down not across

            Re: Unix was always diverse

            Yes and no.

            Radio Shack did have the CoCo 3, which ran OS-9 (all CoCos had a 6809 CPU) and wasn't that expensive. Sadly it was not readily available on this side of the pond.

            1. Roland6 Silver badge

              Re: Unix was always diverse

              Sorry, I was using Radio Shack as a form of shorthand to try and convey the skill and interest level of the market the early PCs were aiming at.

              There were a lot of technically minded (but not necessarily trained) people who could get their heads around CP/M et al and so help their colleagues who were happy to regard the PC as a super typewriter.

          2. Liam Proven (Written by Reg staff) Silver badge

            Re: Unix was always diverse

            [Author here]

            > a desktop Vax with VMS pre-installed

            I disagree, much as I might have liked such a beast. (I point to the VAXstation 4000VLC, of which I own 3.)

            It's very hard to pivot from a low-volume/high-price model to a mass-market/low-profit model.

            What killed DEC was 2 big management mistakes.

            [1] It nearly bankrupted itself trying to make a mainframe-class VAX, the VAX 9000:

            https://en.wikipedia.org/wiki/VAX_9000

            It thought ECL-class discrete logic could outcompete microprocessors. It was badly wrong.

            [2] It cancelled the PRISM and MICA projects, and that lost it Dave Cutler and team.

            https://en.wikipedia.org/wiki/DEC_PRISM

            https://en.wikipedia.org/wiki/DEC_MICA

            If it had cancelled the 9000 early on, and put that budget into the PRISM hardware and MICA software, it might have had a chance.

            As comparison, the Alpha CPU and Windows NT were both salvaged from work on these cancelled projects. Both were significant successes in their time.

            Instead of competing with its own minis with MIPS boxes which could not run VMS, meaning that it had to press on with VAX processors, and later being outcompeted by the wider RISC market, it should have had its own, unified architecture. It nearly did but it backed the wrong horse.

            1. Roland6 Silver badge

              Re: Unix was always diverse

              I accept your points. WRT a desktop VAX, I was perhaps looking at the business desktop PC market that was so critical to the ultimate success of Microsoft. Compared to CP/M etc., VMS was too much for Joe Public/mass-market appeal.

        2. David 132 Silver badge
          Thumb Up

          Re: Unix was always diverse

          > most people didn't realize where networking was going in the early '80s

          So many unhappy memories from that period.

          Banyan Vines. NetBEUI. IPX/SPX. X.25. ISDN. Token Ring, arcnet, 10base2... *shudder*

          Kids today don't know they're born. *shakes cane at clouds*

          1. Down not across

            Re: Unix was always diverse

            I relied heavily on uucp and also C-Kermit as most things had a serial port.

        3. Roland6 Silver badge

          Re: Unix was always diverse

          >but most people didn't realize where networking was going in the early '80s.

          Whilst Ethernet had taken the world by storm by 1985, I would say it wasn't until 1989 (perhaps the autumn of 1989) that the future of TCP/IP - the QWERTY keyboard of networking - was 'assured'...

          Remember Mark Williams Coherent was based on a late 1970s Bell Labs Unix distribution, predating Ethernet and the Berkeley networking stack...

    3. Plest Silver badge

      Re: Unix was always diverse

      Darwin never meant better, he simply meant "more fit for purpose". Package managers, for example, are mostly bloody awful, with very few exceptions. I don't see anyone saying there's a gold standard, and none seem to have won out; instead we have a lot of adequate package managers that just about manage to do the job - "survival of the fittest for purpose". None of them are stunning, just acceptable for the time being.

  9. NewThought

    Chrome OS?

    Maybe I have misunderstood something, but Chrome OS (a flavour of Linux) seems to tick the boxes:

    * regular unobtrusive updates from Google that just work

    * install apps from Google Play, and uninstall them when you don't want them any longer

    I understand that if you've got a job that requires power (e.g. full time video editing (occasional video editing is absolutely fine on a Chromebook)) or something else that's special, you'll choose a different device - but in terms of what this article is about, it seems perfect!

    1. Anonymous Coward
      Anonymous Coward

      Re: Chrome OS?

      An excellent option with just two minor drawbacks for some of the commentards on here (me included):

      1) All the unremovable snooping baked into it, and

      2) All the unremovable snooping baked into it.

      I realise that technically speaking that's just one drawback, but I thought that it was such a big one that it was worth mentioning twice.

      1. David 132 Silver badge
        Thumb Up

        Re: Chrome OS?

        So no jet-powered rocket pants either then, Kryten?

      2. doublelayer Silver badge

        Re: Chrome OS?

        That is one drawback worth duplicating, but I have more if you're looking for a longer list:

        3) It doesn't allow the range of tasks that other systems built on the same kernel do. You have to hack your way in to replace Chrome OS with something else, using device-based methods, in order to get the kind of access that comes as standard with everything else. This alone is enough for it to lose the Linux brand in my mind.

        4) That the security updates that you do get will end at some arbitrary point for no good technical reason. Done to sell more hardware that will be just as capable as this was. On Linux, if your distro ends support for your version, you update to the next version and your support comes back. The ability to do this lasts until a technical reason makes your hardware obsolete, which is measured in multiple decades.

        5) The baked-in connection to everything Google, from closed-source browsers to single-provider services for backup. If you don't like one service on any other Linux, you replace it. Not so much with Chrome OS.

        6) All the unremovable snooping baked into it. (worth mentioning as many times as necessary)

      3. jake Silver badge

        Re: Chrome OS?

        And don't forget

        3) All the unremovable snooping baked into it.

  10. StrangerHereMyself Silver badge

    Micro-kernel

    Linux will either have to be re-written as a microkernel OS or it will die. I'm pretty sure the U.S. Government will mandate microkernels for most of its branches, since their security has been proven to be better than anything Linux can offer.

    Also, I predict the U.S. DoD and NASA will mandate the use of the Rust programming language for their systems and embedded software in a couple of years.

    1. Paul Crawford Silver badge

      Re: Micro-kernel

      They won't, unless you have an OS and matching applications for it in wide use.

      Microkernels might catch on for IoT and similar, but the effort of rewriting an OS and porting applications, or even just trying to make the API completely compatible, is huge. It is why Windows is still in common use: because XYZ business demands ABC package and that is all that matters. Linux has taken a lot of areas, most cloud and web servers for example, and it is what I use myself for almost everything, but it has not replaced Windows for many and never will completely, while something, somewhere, needs win32 compatibility down to some odd or undocumented aspect.

      Add a new OS, rinse and repeat after 15 years.

      1. katrinab Silver badge

        Re: Micro-kernel

        Windows NT started out as a microkernel. Microsoft quickly discovered that this approach made it really slow, so moved the graphics subsystem into kernel space.

        1. Anonymous Coward
          Anonymous Coward

          Re: Micro-kernel

          > Microsoft quickly discovered that this approach made it really slow, so moved the graphics subsystem into kernel space.

          And, in doing so, greatly increased its fragility. Simply adding access to a share for a new employee was sometimes enough to cause it to crash. I don't miss those days at all.

        2. Liam Proven (Written by Reg staff) Silver badge

          Re: Micro-kernel

          [Author here]

          > Windows NT started out as a microkernel

          I think it is more representative to say "NT was _marketed_ as a microkernel."

          It wasn't really. It was a large monolithic kernel, which got larger when the GDI was integrated in the NT 4 release cycle.

          It wasn't a real microkernel, and neither _was_ NeXTstep nor _is_ macOS.

          I have some Opinions on microkernels but that's for another time.

          1. Paul Crawford Silver badge

            Re: Micro-kernel

            Indeed not a microkernel but more of a modular kernel to begin with.

      2. StrangerHereMyself Silver badge

        Re: Micro-kernel

        I assume you've never heard of the Adapter pattern?

    2. jake Silver badge

      Re: Micro-kernel

      "Also, I predict the U.S. DoD and NASA will mandate the use of the Rust programming language for their systems and embedded software in a couple of years."

      Yep. Just like they did with Ada. With similar results, no doubt.

    3. jake Silver badge

      Re: Micro-kernel

      "Linux will either have to be re-written as a microkernel OS or it will die."

      No.

    4. Roland6 Silver badge

      Re: Micro-kernel

      >Linux will either have to be re-written as a microkernel OS or it will die.

      From my reading of the seL4 stuff, that is effectively what they are doing. So in a few years we will be able to compare monolithic Linux to microkernel Linux.

      > I predict the U.S. DoD and NASA will mandate the use of the Rust programming language for their systems and embedded software

      Is Rust better than Ada?

      1. bazza Silver badge

        Re: Micro-kernel

        Rust is different to Ada. "Better" depends on needs...

        Aside from the technical differences, Rust is proving popular. It's also easy to get the tooling. Back when Ada was young the tools were expensive... Rust's goodness coupled with popularity might be the only comparative measure that matters.

        But you'd not deploy it into a safety-critical application yet. That is still Ada, or (weirdly) C/C++, thanks to the availability of certified compilers and libraries (e.g. from Green Hills).

        1. StrangerHereMyself Silver badge

          Re: Micro-kernel

          Rust is taking the world by storm. Applications written in it are unequivocally stable, which greatly reduces the amount of time to develop software and reduces the need to replace it with newer versions.

          1. fg_swe Bronze badge

            Indeed

            From my experience with memory-safe languages I can support "Applications written in it are unequivocally stable". The undefined behaviour of the C and C++ languages has real-world effects such as mysterious crashes and other mysterious "behaviour". Memory safety brings real improvements in terms of reliability, safety and security.

            Apparently, each true and factual statement gets some heavy downvoting here. I now take it as a badge of honor to get heavy downvoting.

            1. anonymous boring coward Silver badge

              Re: Indeed

              "The undefined behaviour of the C and C++ languages has real-world effects such as mysterious crashes and other mysterious "behaviour"."

              All C code I've ever written has had very well defined behaviour. (It's run by a computer, after all.)

              Perhaps you meant "unwanted behaviour when making programming errors"?

              Good programmers generally avoid those scenarios.

      2. fg_swe Bronze badge

        Rust vs Ada

        I assume most people know the C/Java/C#/C++ syntax (curly braces and all that), and Rust *looks* closer to what they know, as compared to Ada syntax.

        Semantically, Spark Ada definitely looks very interesting.

        I am not an Ada guy, but found this: https://www.adacore.com/uploads/techPapers/Safe-Dynamic-Memory-Management-in-Ada-and-SPARK.pdf

        Please also look here for a POSIX compliant OS in Ada: https://marte.unican.es/

        The short answer is that Rust is safer than traditional Ada, which did not have safe heap deallocation.

    5. Someone Else Silver badge

      Re: Micro-kernel

      Also, I predict the U.S. DoD and NASA will mandate the use of the Rust programming language for their systems and embedded software in a couple of years.

      Can you say "Ada"? I knew you could....

      1. StrangerHereMyself Silver badge

        Re: Micro-kernel

        Mandating Ada wasn't inherently bad. The DoD just had the bad luck that the language never saw any take-up beyond the defense industry.

        In the end they had to shelve it because there simply weren't enough experienced Ada developers around to work on software the DoD needed and wanted.

  11. chris street

    ZFS solves a problem that needs solving - RAID 5 write holes. Copy on write, checksumming, very large file sets: all useful, good stuff. Btrfs does much the same thing, despite being complex as hell. I'll even allow that systemd is a good thing overall; despite its tendency to reach out tendrils everywhere, it does solve problems.

    What precisely do the abominable problem children called Snap and Flatpak solve? They bloat stuff up, and take control away from me. I want updates WHEN I choose - not when some faceless gnome decides to push shit out to MY servers and desktops. They offer nothing beyond apt or yum for my convenience.

    1. doublelayer Silver badge

      Not that they always do it well, but they allow people who aren't as familiar with administration as you or I (and sometimes us too) to use programs without the dependency snarl that arises whenever a program can't be found in your repositories, or you don't want to use the version that is in there.

      I had a program I wanted to use. It needed a recent LTS version of OpenSSL. I wanted to use it on an OpenSUSE system that didn't provide that version of OpenSSL (they were on an older stable version). The program concerned wasn't in the repositories at all because it was a new project. Incidentally, if you downloaded the binary the author made, it wouldn't run either because it expected a later version of glibc; the author had compiled for an older version to have compatibility with an older version of Debian, but not so old that it worked with the version the user was on.

      To get this to run, I had to compile several things from source, including OpenSSL, then modify the makefiles to point to my portable versions of the libraries, then tell the admin who wanted to run it to make sure not to jumble these libraries with any of the other copies around the system. I should point out that the OpenSUSE version being used was still supported at the time and I didn't have the authority to make anyone update distros.

      This is what can cause problems and why a system for packaging dependencies when the repositories don't have them has been needed. If you don't need it, feel free not to use it, but people wrote it because it was solving a problem.
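
      In case that sounds abstract, the workaround boiled down to something like the following. This is a rough sketch from memory; the version numbers, paths and make variables are illustrative, not the exact ones involved:

        # Build a private copy of OpenSSL into a prefix that can't clobber the system one
        tar xf openssl-1.1.1w.tar.gz && cd openssl-1.1.1w
        ./config --prefix=$HOME/deps/openssl shared
        make -j$(nproc) && make install

        # Then point the application's build at that copy, baking the path in with an rpath
        # (in practice the makefiles needed editing rather than just overriding variables)
        cd ~/src/the-new-project
        make CFLAGS="-I$HOME/deps/openssl/include" \
             LDFLAGS="-L$HOME/deps/openssl/lib -Wl,-rpath,$HOME/deps/openssl/lib"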

      1. chris street

        "If you don't need it, feel free not to use it, " - so why does Ubuntu use it for Firefox then - when there is a working version in the repos?

        And it's not solved a problem - because as you demonstrated there are other ways to solve the problem that still leave you in control of your systems.

        1. doublelayer Silver badge

          "as you demonstrated there are other ways to solve the problem that still leave you in control of your systems."

          I thought what I demonstrated was obvious but evidently not. What I demonstrated was that there was a problem requiring manual effort to build a fragile set of files to run something on a specific system, effort that would need to be repeated if the environment changed, effort requiring modification of source code (limited, but some) that not everyone knows how to do quickly or at all if we're including nontechnical users. That work was only possible because I had the source for everything, and if it was so easy, maybe the admin who wanted the software could have built this himself instead of having me do it for him.

          Do you want a year of the Linux desktop, whether it's likely or not, because sometimes, people want to install some software and it's not going to sound good when the instructions say either "build the dependencies from source which you can find yourself then modify the build scripts to link to those instead of the system ones that won't work" or "not everything in here's open source so your tech guy probably wouldn't run it, but if you want to, you can try building a chroot and getting libraries from some other distro and maybe that will work". Having a pre-built file that contains the dependencies makes that kind of instruction unnecessary, and it means someone who doesn't know how to or someone who doesn't want to go to that effort can still run the software.

          1. martinusher Silver badge

            The alternative in the closed Windows or Mac environments is essentially "Throw away your equipment and buy new stuff". Yes, it's a pain to have to build stuff, especially when people keep changing versions of things like compilers for no particular reason at all apart from always wanting to be on "the latest", but it's either build or recycle. There is no middle ground.

            I am always wary of user-friendly gizmos because often all they do is wrap a dialog box around a command line utility. This works provided your workflow matches exactly what the designers had in mind, but if it doesn't you're facing a confusing set of (often out of date) help information (assuming it's there in the first place), an often misleading wiki (assuming you can access it) and eventually the prototype command (assuming you can find it).

            I always marvel at the way that Apple, for example, can take a simple concept and turn it into a convoluted nightmare designed with only one purpose in mind -- feeding money into the company. (Windows isn't much better -- it's always grated on me that they used the backslash for path separators, are stuck with drive letters and call directories 'folders', little meaningless things that are designed to differentiate their offerings but invariably end up causing problems.)

            1. doublelayer Silver badge

              > The alternative in the closed Windows or Mac environments is essentially "Throw away your equipment and buy new stuff".

              No it isn't, and you know that. The alternative in the Windows world is "bring your DLLs with you and don't put them in C:\Windows anymore". Programs were terrible for violating that a couple decades ago. Not so much now (yes, there are always exceptions, but it's a lot more common to have self-contained program directories now). Apple has the same thing in the form of app bundles, which works better for GUI applications than for CLI ones. The point is that if some application needs a specific version of something and can't accept the OS-provided one, it brings that version and it stores it in such a way that it won't override anyone else's copy.

              I don't know where you got the "throw away your equipment" part of this, as even the most polluted Windows or Mac OS installation, and for that matter Linux or BSD installation, can be wiped and reinstalled from scratch without having to do anything to the hardware. The benefit of packaging things with lots of dependencies is that that pollution is harder to build up.

          2. chris street

            Let's look at it this way...

            Product X doesn't work because distro has library issues etc...

            Vendor can either fix these issues and get it included in the distro, or build a repo for it.

            Or build a snap, that screws up home folders, takes control away from the end user, is a fecking PITA for security, is slow, is bloated, gobbles disk space, is not transparent and doesn't follow the Unix ethos.

            Why would snap ever be considered a good idea? Just deploy the thing properly in the first place FFS. It is MY system - if you want me to use it, you need to remember that.
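
            To spell out the vendor-repo route: on the user's side it is a handful of lines, done once (the URLs and package names below are purely illustrative):

              # Fetch the vendor's signing key and register their apt repo (Debian/Ubuntu style)
              curl -fsSL https://example.com/apt/key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/example.gpg
              echo "deb [signed-by=/usr/share/keyrings/example.gpg] https://example.com/apt stable main" \
                  | sudo tee /etc/apt/sources.list.d/example.list
              sudo apt update && sudo apt install example-app

            After that, updates arrive through apt on my schedule, like everything else on the box.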

            1. doublelayer Silver badge

              And if it's a closed-source commercial application, they can do that. If it's an open source application, they may not want to build their own repository system for every distribution that has ever existed, but they still might want the nontechnical user to be able to install it on everything. Portability isn't just on the developer; it can also be on the OS provider.

              The last thing I want to say to someone I've introduced to Linux is "yes I know you can download basically any Windows or Mac OS application off the internet and just run it, but for that one you want I'll need to repackage it because it wasn't built for your distro. Yes, your distro, it's a collection of components around Linux. Yes, you're running Linux but not all Linux systems are the same. No, there's not a true Linux that has the right version of everything, it's just lots of choices and they don't always work together. Yes, if you were still using Windows you could download that Windows file and just click on it. Never touch your computer again? If you say so." That doesn't instill confidence, and Linux doesn't deserve that when it can be solved. Again, if you don't want to use any of these packaging systems, feel free not to, but they do solve a problem for some people.
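
              For that person, the pitch of these formats is that the whole dance collapses to a line or two they can be told to type, something like this (package names are illustrative):

                # Flatpak: add the Flathub remote once, then install apps from it
                flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
                flatpak install flathub org.mozilla.firefox

                # The snap equivalent
                sudo snap install firefox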

              1. chris street

                > Again, if you don't want to use any of these packaging systems, feel free not to,

                Which brings us back to the point - I'm not ABLE to do that because Ubuntu is forcing them upon me...

                Not that it matters now with the decision to jump distro to Mint after the advertising debacle but still...

      2. yetanotheraoc Silver badge

        without the dependency snarl

        Snap and Flatpak also have dependencies, albeit only at the interface with the OS. With a few such programs that doesn't seem to be a big deal. When *all* of userland is Snap (or Flatpak), then you will be dependent on the packagers, and will have effectively opted in to a walled garden. Unless you think it would be simpler to build your own Snap package from source.

        1. doublelayer Silver badge

          Re: without the dependency snarl

          Of course they have dependencies, just like anything else. The benefit is that, if you already have them, you can avoid other dependency problems. As for walled gardens, it's only one of those if I'm required to use these packaging systems. I'm not. If I want to build something from source and run it, I can with no difficulty. That some other components have been packaged that way does nothing to prevent me using an unpackaged binary, not that building a package is a particularly challenging task compared to building from source (I wouldn't, but generally because if I've built from source then I'm not trying to distribute the binary I just made).

        2. FatGerman

          Re: without the dependency snarl

          In what sense is apt/urpmi/etc. NOT a walled garden?

    2. Liam Proven (Written by Reg staff) Silver badge

      [Author here]

      > What precisely do the abominable problem children called Snap and Flatpak solve?

      That is part 2 of this article, which I am working on when not replying to comments here; I'm looking to see what issues people are raising so I can address them.

  12. katrinab Silver badge
    Trollface

    zfs is The One True Filesystem, and it is totally understandable why people would want to bring FreeBSD's biggest USP to Linux.

    btrfs is a nice idea, but given that it has been around for 13 years, and still isn't ready for production use, I doubt it will ever make it. Probably everyone who needs those sorts of features has been happily using zfs for the last decade or so.

    Snaps need to be taken round the back and er snapped.

    Flatpaks seem to be a perfect example of https://xkcd.com/927/

    1. Paul Crawford Silver badge

      btrfs was started, I think, by Oracle as an alternative to ZFS, offering stuff like checksums and snapshots, but once Oracle bought Sun Microsystems they didn't really need it any more.
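
      For anyone who hasn't played with either, the snapshot side is a one-liner on both; the pool, dataset and subvolume names here are illustrative:

        # ZFS: snapshot a dataset before an upgrade, roll back if it goes wrong
        zfs snapshot tank/home@before-upgrade
        zfs rollback tank/home@before-upgrade

        # btrfs: snapshot a subvolume into a sibling location
        btrfs subvolume snapshot /home /home/.snapshots/before-upgrade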

      Just a shame ZFS was not re-licensed by Sun before they were gobbled by Satan....

  13. Anonymous Coward
    Anonymous Coward

    ...and in the next instalment...........

    @Liam_Proven

    Quote: "...Make Linux safer..."

    (1) Define "safe".....and then define "safer".....

    (2) Does "safe" or "safer" imply that network security is part of the discussion?

    (3) Given hundreds of distributions (see distrowatch for details)...let us know if these discussions about "Linux" apply to all distributions.

    (4) Define the scope of the problem more tightly. Is it workstations with GUIs? Is it just the kernel? Do headless servers have the same problems as other implementations?

    Another quote: "Doesn't the old stuff work? Well, yes, it does – but not well enough."

    (5) Define "old". Then there's "not well enough"....for exactly which user constituency?

    Maybe in the next instalment, we might see some discussion and some clarification.

  14. Anonymous Coward
    Anonymous Coward

    Bah.

    I was introduced to AT&T System 3 on 3B2 hardware in 1994. A quick ls /etc/ /bin and an hour with man, and I could manage the system.

  15. fg_swe Bronze badge

    Also: Modern Day Unix

    MacOS X: a very nice Unix system (from mechanics to GUI) running on top of a superfast/superefficient ARM CPU.

    iOS: locked-down Unix.

    Android: locked-down Unix.

    OpenBSD, FreeBSD: important players in several applications, some of them of strategic relevance.

    For better or worse, Unix dominates the computing world. Also, it is much more than the Linux kernel.

  16. Binraider Silver badge

    If only there were a single source attempting to co-ordinate the production of an entire ecosystem...

    GNU's Hurd project spent a lot of time and effort not agreeing on how to do a kernel. Enter Linux.

    An accident of the modular design is of course that anyone can rock up and lump things together. Sometimes it works. Other times, you get SystemD or PulseAudio.

    Fragmented efforts have some downsides, but also some upsides. When a design paradigm lives or dies by survival of the fittest, there's a kind of Darwinian evolution at work. If RHEL had its way and cornered the market, I'm pretty sure Linux would not be as healthy as it is.

    But it's not immune to crap and we have to participate as users, journalists and developers to keep the ideals alive.

    Failing that I'll just get a crapple for consumer stuff and stick to old computers.

    1. Anonymous Coward
      Anonymous Coward

      @Binraider - Like for instance

      Google coordinating the standards for the entire Internet browser ecosystem?

      1. fg_swe Bronze badge

        They Wish

        Good Old HTML is very much alive. It can be read using NetSurf and other little browsers.

        Mind you, it is called World Wide Web, not Elite Controlled Mainframe.

        Run your own little server behind your DSL modem and be free from the whims of the oligarchy.

  17. Vocational Vagabond
    Devil

    "Gray Bearded" .. Indeed...

    I object, I was only *Grey* bearded that one time I was locked down because 'covid'... and it's now bash- and YAML-dependent Ansible playbooks... and people call me a dinosaur..

  18. RedneckMother

    this may be redundant...

    I haven't read all previous comments, so "EXCUUUSE MEEE" if I am being redundant.

    The reason WindBlows took off was because MS gave away the development tools to programmers (back in the day). It was cheesy and crappy, but they got to market first, and dominated the desktop application space.

    Also, MS played a long game against IBM and OS/2.

    I was one of those people who supported various proprietary *nix platforms (achieving certs for several, and fighting all the conflicts). *nix finally began to compete with the adoption of Linux on multiple platforms.

    Many years ago, I was able to banish MS from my household. I have various Linux distros, on various hardware platforms, and am (mostly) happy with the interoperability (and VERY happy with the adherence to standards).

    I don't do windows.

    1. fg_swe Bronze badge

      Not Missing Windows Either

      I use Linux and MacOS. The latter is a Unix with a very nice GUI and ergonomics, and it comes with a nice Apple office package. Compilers I can get from brew, and the bash command line feels like any other Unix.

      OpenOffice and Linux in general do not look as polished, but they certainly do the job, too.

      Buy a Linux computer from a Linux vendor, if you don't want to spend many hours driver-hunting/compiling.

      Only the business folks *think* that they need Windows+Office.

  19. Missing Semicolon Silver badge
    FAIL

    But they're not finished

    I wouldn't mind Snap or Flatpak so much if they weren't so broken. Since they run the app in a sort-of-vm-jaily-thing, it doesn't integrate with the desktop properly (unless you run precisely the same desktop as the original author built for), both in appearance and in access. We have all seen Snaps that won't access non-local filesystems, and don't behave well on the local network.

    The UI is really a stinker for me. Most snaps look like remote desktop sessions to somebody else's computer (running Gnome).
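
    To be fair, some of the filesystem grief comes down to interface plugs that aren't connected by default. From memory the incantation is something like the following (the snap and interface names vary, so check what 'snap connections' reports):

      # See which interfaces a snap has and which are still unconnected
      snap connections firefox

      # Grant access to drives mounted under /media and /mnt, for example
      sudo snap connect firefox:removable-media

    But the average user shouldn't have to know any of that.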

  20. Someone Else Silver badge

    Depends on your definition, I guess...

    From the article:

    Modern Windows is based on Windows NT,[...] and was a modern, hi-tech OS from the start.

    For rather small values of 'modern', 'high-tech' and 'OS'.

    YMMV, of course, but probably doesn't, if you're honest.

  21. fg_swe Bronze badge

    Turn Windows Into Unix

    I find the Cygwin toolkit a very useful extension of Windows:

    http://cygwin.org/

    It gives me much of the power of Unix on Windows:

    perl, wc, egrep, sed, vim, ctags, ls, gcc, make and so on.

    Much more powerful than the simpleton cmd shell of Windows. No need to learn PowerShell.

    Many (most?) Unix programs run nicely on Cygwin, including many which need an X11 server.
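
    A trivial example of the sort of thing that is painful in cmd but a line or two at a Cygwin bash prompt (the paths are illustrative):

      # Count the C files under a project tree that mention "malloc"
      egrep -rl --include='*.c' 'malloc' ~/src/myproject | wc -l

      # Strip trailing whitespace from all of them, in place
      find ~/src/myproject -name '*.c' -exec sed -i 's/[[:space:]]*$//' {} +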

  22. ecofeco Silver badge

    Excellent article

    See title.

    Great article.
