
All to be superseded
I’m just waiting for the Systemd package manager and file-system re-organizer. Only a matter of time.
A new version of Linux distro NixOS has been released, just one day after a contentious blogpost that asked "Will Nix overtake Docker?" For DevOps folk, this was tantamount to clickbait: Nix and Docker are different tools for different jobs, and anyway, it's possible to use Nix to build Docker images. The distro, which hit …
Consumers have this nasty habit of wanting things updated, especially when they're always online and running software from other people. In DOS days, maybe a read-only OS would still have worked, but by now it would not. We could try monolithic OSes where everything is updated at once and can only be replaced in its entirety, but that just hides the small changes that still happen, at the cost of a lot more bandwidth and storage used for updates.
It all started going downhill when people changed /u back to /usr ... Us lazy bastards at Berkeley had changed AT&T's /usr to /u in the name of brevity, but apparently it made things hard to understand for newbies.
/u (or /usr) always had non-home directories & other stuff in it ... Source code, documentation, the man pages, user installed binaries available to everybody, and other useful tat like vi, EMACS, UUCP and the BSD games pack.
It was both useful and logical to split it into /usr and /user when the system grew large enough, and had many users. Then some bright spark decided that /usr and /user were too confusingly similar, and thus /user became /home ... except in the appleverse, where they "simplified" it to /Users (Caps in a system directory name? WTF‽‽‽).
"thus /user became /home ... except in the appleverse, where they "simplified" it to /Users (Caps in a system directory name? WTF‽‽‽)."
I have to ask. Why is that a problem? I distinctly remember you defending the case-sensitive filesystem a while ago (I was too), so you don't seem to mind having capitals in some filenames. So why must a system directory be lowercase if other ones can be uppercase? When I use Mac OS, it's a little annoying, so I would prefer lowercase, but I don't think that's a firm rule. It also allows them to use PascalCase for multi-word names, which is probably why they're doing it.
It's not really a problem, more of an irritation. The filesystem is a major part of the operating system as a whole. It is fairly consistent across Unices, and has been for a long time. Making changes like that breaks who knows how many scripts that would otherwise function perfectly well after merely copying them from system to system. And this change on Apple's part is for what reason? Near as I can tell, it's nothing more than an affectation. There is absolutely no reason for it at all. Sticks in my craw, it does.
Note that Apple isn't the only company to do this kind of thing, I just used them as an example that would be familiar to most ElReg readers.
Before anyone asks, yes, the naming of the X11 directories also sticks in my craw ... but I've learned to live with it over the years.
>It all started going downhill when people started putting stuff that wasn't users' home directories into /usr
Who or what is a user? One of the things I've had to overcome is the widespread assumption that a "user" is solely a human glued to some kind of user interface. (Windows people are particularly prone to this notion.)
> All the enterprise Linux vendors are working away on this... a thoroughly tested and integrated image which end users can't change and don't need to.
That the end user CAN'T change. How do they square that with the principles of free software?
At the very least they are removing users' freedom to change and improve the software and to distribute copies of modified versions to others. Half the four essential freedoms of free software!
How do they license this resulting code?
The catch is that Linux lives in different worlds, each with its own (often legal) requirements.
Changing the software and stuff like that is part of the hobbyist and testing world. These often live in small, mostly-controlled environments where the consequences of Things Breaking are minimized if not contained.
But in an enterprise setting, stability is paramount. Downtime costs money, and "Things Breaking" can result in bills if not lawsuits, so some of those freedoms have to go in the name of legal compliance (which trumps the Four Freedoms).
As a programmer of 40 years, commercially for 25, you don't need to explain the corporate IT world to me. My current employer has contracts with everyone from Santander to NASA.
Having a stable platform does not require removing user freedoms. In this case the user, being the enterprise, still deserves the same freedoms as a lone user at home.
One freedom that open source software brings to the enterprise world is the ability to FIX THINGS YOURSELF.
So any package wrapper that includes all versions of all libraries that are natively compiled and guaranteed to work together needs a SOURCE PACKAGE to build them all together, as well as run them all together. I am not familiar enough with these systems to know whether or not they already do this, but it would be akin to using the 'ports' system on FreeBSD to do 'build from source' on EVERYTHING.
Now if an executable ships as binary with a container, the GPL'd lib sources and binaries would need to be there too, so you can fix things. Making a "source package" for the entire container should actually make GPL compliance easier (and more portable).
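To make that concrete, here's a rough sketch of how one might gather the matching sources for a Debian-based container image. The image name is a placeholder, and it assumes deb-src entries are enabled:

    # List every installed package, pinned to its exact version
    # ("myapp:1.0" is an illustrative image name):
    docker run --rm myapp:1.0 dpkg-query -W -f '${Package}=${Version}\n' > manifest.txt

    # Fetch the corresponding source packages for redistribution
    # (requires deb-src lines in sources.list, and an archive that
    # still carries those exact versions):
    mkdir sources && cd sources
    xargs -a ../manifest.txt -n1 apt-get source --download-only

Ship the sources directory alongside the image and the GPL paperwork largely takes care of itself.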
The right to fix things oneself doesn't necessarily accord the ability along with it. If you believe that, consider the adage of "If you want something done right..." against cryptography. Simply put, it's not for everyone, especially if you have to deal with Stupid every day (the kind of Stupid that makes the "Who, Me?" section here at El Reg).
If you are a cog, you have no say in what the machine does. You have two options: Suck it up and accept it like the rest of the sheeple ... or you can move on to greener pastures (starting with attempting to change the culture where you are now, if you like).
Your mythical Captain Peachfuzz (usually the Nephew of The Founder, or the like) can be treated as the speed-bump that he is. Rule one: Get it in writing, with a wet signature.
I'm a man, not a number. Yourself?
Which is kind of my point. You want to present immutable systems to the cogs so they don't accidentally become spanners. I said nothing about stopping you from making those immutable filesystems to your own specifications. There are your Four Freedoms where they matter most.
As for Rule One, that wet signature can often (a) be in vanishing ink or (b) be overruled with some good lawyers behind the scenes.
As for me, I am both. The two are not mutually exclusive, and it's a necessary evil in today's society. That little government ID tends to drive the point home.
...and, ultimately, the perfect metaphor as to why Linux will NEVER make significant inroads on the desktop.
The "majority" of users, quotes because the term is being technically used here, as in provably statistical, can not and do not understand the inner workings of both computers and the code that they run. They are simply the dictionary definition of "user", and nothing more.
So every time the Linux hordes expect that "user" to become a "technician", and every time a Linux noob gets an "RTFM" reply to a problem that either should not have been there in the first place or should have had a much more intuitive solution from the beginning, another nail is driven into the coffin of desktop Linux adoption.
On the other hand, MeDearOldMum and GreatAunt have been happy Linux users for many years now. Since moving them from the world of Windows to Slackware, their support calls to me have fallen from several times per month for the bastard child of Redmond, to none (zero, zilch, nada, 0) for well over two years now on Slack.
Linux works perfectly well on user desktops, as long as the wetware of the installer understands the needs of the user.
"And that's what the one-size-fits-all preordained enterprise or distributor build fails on."
As I've been saying for at least a decade ...
Did you cut'n'paste that from 2004?
Sure there are reasons that Linux has not "made it" on the desktop (not that I care - I'm happy enough just to get on with using it), but that's no longer one of them (if it ever was), nor has it been for some time.
But I suspect you know that, and simply forgot the troll icon.
From what I can see, what they want to achieve is an ISO image that doesn't allow the end user to make changes. Much like how Android phones don't, by default, give the end user root access to the filesystem. And yet the AOSP source code is still available under Apache and GPLv2 licenses, so you can modify the code and compile your own if you want.
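For the record, the AOSP build really is just a handful of commands; the branch and lunch target below are illustrative examples only (check source.android.com for current ones), and it takes serious hardware:

    repo init -u https://android.googlesource.com/platform/manifest -b android-13.0.0_r1
    repo sync
    source build/envsetup.sh
    lunch aosp_arm64-eng   # illustrative target
    m                      # build; bring a beefy machine and patience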
Er, I think you might be mixing apples and oranges here.
The four freedoms apply to source code, to the ability to develop and distribute your own software based on others' work.
What they're trying to prevent is people like me having a decaffeinated moment and typing rm -rf * at the root directory and then screaming in panic when they realise what they've just done.
So I can easily imagine a "DontDeleteStuffYouShouldntByAccident" package being GPL... at least until Skynet becomes self-aware, starts listening to The Cure and then decides to delete itself, that is.
> [ ... ] typing rm -rf * at the root directory and then screaming in panic [ ... ]
There's a perfectly valid prevention for that particular accident: rm -rf or rm -f does nothing when getcwd(3C) is "/", or when the removal target is immediately below "/". The command that can delete in "/" is called rootremove or slashremove or some other such unintuitive thing that can't be typed accidentally.

Solaris has been doing this for many years. I don't quite get why Linux didn't adopt the idea. Most Linux distros rely on aliasing rm, and that is very brittle.
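A shell-function version of that guard is easy enough to sketch. A toy illustration only, not the Solaris mechanism, and it shares the alias approach's brittleness, since anything invoking /bin/rm directly sails straight past it:

    # Refuse to run rm when the working directory is /, or when
    # "/" itself is named as a target.
    rm() {
        if [ "$(pwd -P)" = "/" ]; then
            echo "rm: refusing to operate in /" >&2
            return 1
        fi
        for t in "$@"; do
            if [ "$t" = "/" ]; then
                echo "rm: refusing to remove / itself" >&2
                return 1
            fi
        done
        command rm "$@"
    }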
Nah. I admin my own computers. The only person who can take me down is me[0]. And if I do, so be it ... at least I'll have learned something. Having proper backups and automatic failover hardware redundancy in key systems helps with peace of mind.
[0] To be perfectly honest, the Wife has access to the safety deposit box with the BigBookO'Passwords[tm], just in case I step in front of a bus. She's *nix literate (finally!) and knows how to access everything, but she's never seen reason to use root, even on her own computers (Slackware and BSD), so she doesn't.
So you're an advocate of stupiding the world down to the lowest common denominator?
When I was younger I thought I could change the world. Now that I've been teaching on and off for about 45 years, I've come to the realization that probably nine out of ten humans are ineducable beyond "eat here, sleep there, bathe occasionally & don't poop in the living room".
I can live with that. Just don't ask me to join them in their mire in the name of you separating them from their money. That's between you and them, kindly keep me out of it.
... and it irritates the pig.
No, I'm an advocate of trying to find a way to fix Stupid. Hopefully not by natural selection, to avoid excessive widows and orphans, because as history has shown, Stupid can easily bring others down with them. You can try to stay out of the mire, but often the mire comes to you, at which point your options are limited.
someone will still find a way to do it, due to ...
... a remote login that terminated without being noticed. Like: you were logged in to a remote embedded system that you were setting up, it failed, and you want to start over again. You want to erase everything from the root ... but in the meantime, a colleague called for a coffee break, and when you come back, the session has terminated and the dreaded command is executed locally, not on the remote system.
It has happened; not on the root filesystem, but on a user's $HOME.
"There's a perfectly valid prevention for that particular accident:"
Yes, but how often does that particular accident happen? It's pretty deliberate: you have to be root, not know where you are in the filesystem, and habitually use the -f flag without needing to. It surely happens, but I think it's probably minor compared to the various other ways one could destroy a system by accident (it's much easier to accidentally destroy something with a dd command typo).
1. http://www.islinuxaboutchoice.com/
2. It's all 100% open source software. You can do whatever you want with it. This is about solving problems and improving operating system design, not restricting freedom.
3. As a user you can still change it. Commands and documentation are provided. The point is you don't want to.
Creating a strong separation between "OS" and "Applications" is the end goal here.
With traditional Linux distribution tools and approaches, if you want to downgrade your version of LibreOffice or get compatibility with a different version of Blender3d, the only way to end up with a supported configuration is to re-install your operating system.
That is a complete shit design. Nix helps solve this. OSTree and Flatpak helps solve this, so does OCI containers. There are a variety of different approaches with their different merits, but apt-get and rpm and dnf definitely do not solve this.
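For illustration, this is roughly what it looks like in Nix. A sketch, assuming nixpkgs attribute names, with the pinned revision URL purely illustrative:

    # Ephemeral shell with today's Blender, without touching the system:
    nix-shell -p blender

    # Same, pinned to an older nixpkgs revision for an older version
    # (<rev> is a placeholder commit hash):
    nix-shell -p blender -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz

    # Both land in hash-addressed /nix/store paths, so neither build
    # can clobber the other's libraries.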
"With traditional Linux distribution tools and approaches if you want to downgrade your version of Libreoffice or get compatibility with a different version of Blender3d the only way to end up with a supported configuration is to re-install your operating system."
Horseshit. Total, utter and complete horseshit.
Open Source is about choice.
I run open source software (in my case, Debian GNU/Linux) because of the freedom it gives me - I know that if I don't like one solution, I can install another or write/hack my own.
I know that for this reason I will never have DRM or other "pay us to remove" anti-features in my software. And I am aware that "payment" includes surveillance data for many "free" closed-source apps. If any open source app were to start spying on people, there would be an easy solution involving a ubiquitous piece of cutleryware.
I agree that packages can sometimes be broken downstream in Linux distributions, but reinstalling the operating system (even if you count debootstrap/chroot as reinstalling the OS) is never the only option. There's always a way around any problem; that's one of the most beautiful things about Open Source.
Whereas in your pre-packaged, containerised world, you remove that freedom from users by making it difficult for them to fix issues with your software by themselves, and harder still for them to make it work with other pieces of software that you hadn't personally envisaged. But maybe you are flogging some closed-source, DRM-infested spyware/crapware and you don't want your customers looking too closely at what you are doing on their computers. In which case you are on the wrong platform, mate; you can go and take a Running Jump.
Rather than force users to have some god-awful containerised Linux system like Snap or Flatpak (or worse, Android, which just bungs a JVM on top of Linux, hides away the workings completely, and actively punishes users for trying to build applications from source: oops, you have debugging enabled, SafetyNet failed, no DRM apps for you), I would prefer to educate and encourage users to build apps from source, so that they can understand how the system works, fix bugs or add missing features themselves, and contribute to the community project that they are using.
If I really don't like something in a Debian package, or some version that I want has fallen out of compatibility with my system packages, I can always run apt-get --build source [package], and fix it myself.
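Spelled out, that workflow is something like this ("fred" being a placeholder package name):

    sudo apt-get build-dep fred    # pull in the build dependencies
    apt-get source fred            # fetch and unpack the source
    cd fred-*/
    # ... fix whatever is broken or missing ...
    dpkg-buildpackage -us -uc -b   # build unsigned binary packages
    sudo dpkg -i ../fred_*.deb     # install the fixed build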
"I would prefer to educate and encourage users to build apps from source, so that they can understand how the system works, fix bugs or add missing features themselves, and contribute to the community project that they are using."
And if they reply, "I ain't got time for this! JFDIOE!"?
Depending on who and where you are, and who is paying the bills, one answer is "But this IS the way it is done!" ... if they are so computer illiterate as to believe you, you're on the first step of the ladder to their education.
Building apps from source ain't exactly rocket surgery. Any idiot can do it.
GoboLinux looks like the setup we had at work in the early 1990s.
Each workstation mounted an NFS share containing all the company and public domain (now open source) executables and libs. All the shares were kept in sync: releases were built and copied to one place by a central source code team, and processes rdisted them around.
So: an easy release process, and all users had the same versions of the end apps, but each app could use different versions of libraries, so a new library could be released without forcing everything to upgrade.
The build logs would also show the exact version of the libraries used so a repeatable build if needed.
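For the youngsters: the mechanism was about as simple as it sounds. A Distfile told rdist what to push where; the hostnames and paths below are invented for illustration:

    cat > Distfile <<'EOF'
    HOSTS = ( ws01 ws02 ws03 )
    FILES = ( /export/tools/bin /export/tools/lib )
    ${FILES} -> ${HOSTS}
            install ;
            notify builds@example.com ;
    EOF
    rdist -f Distfile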
Yes, "... in folders with names based on cryptographic hashes" means they are reproducing Windows registry type problems and all the other problems created when people say "let's use binary!" instead of text.
In case you haven't heard, 'accessibility' is a high priority. Making the sighted feel blind is not helping that.
Using the wrong tools for the job has become THE requirement in all fields these days.
Rube Goldberg is laughing in his grave. Computers were supposed to bring efficiency and do all the heavy lifting. Instead, the powers that be have given us complexity for the sake of market lock in. And most of them are broken beyond Kafka's worst nightmares.
Much as I love Linux for a great many reasons, the one thing that has long been an annoyance to me, as a user, is its shambolic file directory tree.
Before any of you very techy old-timers blow a gasket at that: yes, history, things just grew, different people made different decisions about where stuff should live, and it all somehow ended up with what we have today. I understand this (at least in general), and that hindsight is a wonderful thing. As is having a free operating system which I'm not even remotely capable of contributing code to.
However - I have, on occasion, had the experience of installing a piece of software called, say, "Fred", only to find that Fred doesn't subsequently appear in the menu system. OK, so I try typing fred into the command line. No joy. So I try Fred - still no joy. So I try searching for files with filenames including the string "fred" - and if I don't succeed then, I simply give up and ask the package manager to uninstall Fred.
I don't much mind if the core of the OS is in its own directory or even one or more hidden directories, but stuff that I can install via the package manager - it would be sooo nice to just be able to look in the Programs folder, find the relevant folder for whichever program is giving bother, and KNOWING that whatever is amiss, it's in THERE, not in some unknowable place due to the arcane whims of a software dev who may or may not have used whatever are considered best practices for where to put stuff.
That aspect of GoboLinux looks great to me. Not keen on the compile-it-yerself aspect, though. But I hope their idea re the filesystem prompts other distros to also do something to make the directory system easier for mere mortals to comprehend!
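For what it's worth, on a Debian-family system the package manager can already answer "where did Fred go?" - a quick sketch, with "fred" as a placeholder package name:

    dpkg -L fred | grep -E '/s?bin/'   # list the package's files, keep the executables
    dpkg -S /usr/bin/fred              # or the other way: which package owns this file?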
Useful command; thank you! I must try to remember it.
The snag for those of us who distro-hop fairly frequently is that *any* distro- or packaging-tool-specific command doesn't help if it's a Flatpak or a Snap or an AppImage, or you're not on a Debian/Ubuntu-family distro, or not on Linux at all.
Example from last week: on Ubuntu, I installed both a text editor and a PDF reader that were in cross-distro packages: one a Snap, the other a Flatpak. I then couldn't find the command to invoke the damned things. This suggestion addressed my exact problem, *but* it wouldn't help because the packages were not .DEB ones.
Oh, and the packages did not appear in the desktop until after I rebooted.
Anything distro-specific is more or less my last port of call and I will probably only try it when I find it via Googling.
The need here is for them to be findable, *without* a reboot, using only generic Unix tools such as `which` or `man -k` or `apropos`, or in the desktop's app launcher -- whatever desktop, whatever launcher. It _must_ be cross-desktop or it's NFG.
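What exists today is packaging-tool-specific, which is exactly the complaint. A sketch of the Snap and Flatpak variants, with app names as placeholders:

    snap list                                  # what snaps are installed
    snap run fred                              # launch one by name
    flatpak list --app --columns=application   # installed Flatpak app IDs
    flatpak run org.example.Fred               # launch one by ID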
"The need here is for them to be findable, *without* a reboot, using only generic Unix tools such as `which` or `man -k` or `apropos`, or in the desktop's app launcher -- whatever desktop, whatever launcher. It _must_ be cross-desktop or it's NFG."
Then you're chasing unicorns because there will be setups out there that will be intentionally incompatible with another's standard. Any setup you find will not work somewhere. It's like trying to find the one question that will get you the same answer from everyone.
"Then you're chasing unicorns"
Says the guy on record as trying to fix stupid ...
As a side note, a combination of find and grep, perhaps with the addition of a filter or two (to dig into archives), will easily do exactly what TOA wants. This has been possible for well over three decades.
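Something along these lines, say - a rough sketch, hunting case-insensitively for anything named like "fred":

    # Everything on the root filesystem with "fred" in the name:
    find / -xdev -iname '*fred*' 2>/dev/null

    # Narrowed to executables and desktop entries in the usual places:
    find /usr /opt ~/.local -iname '*fred*' \
        \( -perm -u+x -o -name '*.desktop' \) 2>/dev/null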
"Says the guy on record as trying to fix stupid ..."
At least I have a good reason for doing so. If we don't, Stupid will take the rest of us down with it. We're already seeing it in the political arena.
"As a side note, a combination of find and grep, perhaps with the addition of a filter or two (to dig into archives), will easily do exactly what TOA wants."
But try telling that to the average Joe. That's my point. You're talking GeekSpeak, for all they care. Plus, Murphy can still strike. There can either be (a) more than one match or (b) a match that's actually not a match.
I don't much mind if the core of the OS is in its own directory or even one or more hidden directories, but stuff that I can install via the package manager
I agree that there is a bad confusion in the Linux world: the core system (as in an OS = Operating System) and the user's applications should have different treatment. In an ideal world, the OS should be quite minimalist, just enough to ... well, operate the system: kernel, init system, a minimal set of commands? Maybe add a graphical layer, if it is considered part of the core system. BUT: leave out all the user applications !!!
And I think that is exactly what /usr was meant to be: but then came /opt, and /usr/local, and then the braindead RedHat decided that /usr should be merged down to /, and the idiots from Debian followed, like they did with systemd.
So PLEASE: come back to the old Unix way of doing things and everything will be fine: put all the core system at / (root) and all application stuff in a separate directory. And since /usr is already there, just use it !!!
If Fred's installer puts it somewhere where you can't find it, doesn't document it and/or doesn't add it to the menu, then the problem is entirely down to Fred's installer. Whoever put that together would probably do the same thing whatever the OS and filesystem, and percussive education would be appropriate.
What kind of Linux-head are you? The binary is *obviously* called joesbigbrother! /s
(Sadly this is not *quite* as sarcastic as it should be, I have in fact run into exactly this disconnect between package name and actual runnable program file name and menu entry name.)
I've used Nix; a software project I was taking a look at used it for reproducible builds (ultimately building firmware images to load onto a device). I think some packages (edit: excuse me, closures*) had prebuilt binaries (with the option to build from source, but the binaries had checksums etc. to ensure they were identical to what the build would produce anyway). For the source builds, it has some tools that strip out whatever variant information (dates & times, I suppose) the compiler or linker throws into the binaries, so it can directly check that the result of the build is identical to what it should be.
I found it rather difficult to use; but it puts everything in /nix/, with subdirectories under there bearing the inscrutable hashes alluded to by TFA. Each app has its ldconfig, path, etc. set so it's not expecting anything in the traditional /bin, /sbin, /usr, etc. (your home directory does stay in /home, at least when running Nix on a normal system). It has a vaguely FreeBSD Ports or Gentoo Portage style set of directories with the available "packages" (I don't know if that's what Nix calls them) in there.
But if you have multiple programs built on top of the same set of libraries, you do in fact have only one copy of those libs taking up space. I don't know how this works out in practice, though; if you want to "update", say, GTK or libc, you either have to update a large number of packages with the updated dependency (and compile them, either on their end or yours...), or alternatively the packages "drift" in what versions of libs they want and you do end up with multiple library versions building up.
*re: closures. That was one issue with Nix: I found the terminology confusing as hell. Nix has done a mathematical proof that the builds are complete and reproducible. Among my comp sci classes, I took an algorithms class that was heavy on the mathematics of proving whether a bit of code was O(n), O(n^2), etc., and I still found the terminology rather difficult just for the section on how you're supposed to use the darn thing, let alone the proof.
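The hash-addressed store is at least easy to poke at; a sketch, with the store path below purely illustrative:

    ls /nix/store | head    # hash-prefixed directories, one per build
    # Exact run-time dependencies of one store path:
    nix-store --query --references /nix/store/<hash>-firefox-115.0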
Linux is weird, because it seems to be heading in two directions. At one end, it's a simple, lightweight OS running my router. At the other end it's a bloated impenetrable monster, with a vast monolithic kernel and increasingly convoluted ways of installing and managing applications. How long can this go on for?
You seem to be missing the distinction between "The Linux Kernel" and "a KitchenSinkware Distribution".
The kernel, in essence, manages the hardware it is running on. That's pretty much it. It is quite boring, and the average end-user never even knows it exists.
A KitchenSinkware distribution adds all the software bells & whistles for your every pointy-clicky delight.
Thus, your router and KitchenSinkware are two completely different distributions, serving two completely different needs. So of course they are different ... even though they both can run the same, exact Linux kernel. (Although the kernel in your router will be compiled for that specific hardware, and the one in KitchenSinkware for as many hardware types as possible, thus the size difference.)
That's why Ubuntu (and clones) have all the issues that Cupertino & Redmond based OSes have, and for all the same reasons. Trying to make one desktop OS that works for all users, everywhere, inevitably makes for a bloated, buggy code-base. You'll have better luck with a more targeted distribution ... or better, learn to build your own custom version.
"All the enterprise Linux vendors are working away on this. The goal is to build operating systems as robust as mobile OSes: "
Ah, fuck me! There are few computers I hate as deeply and genuinely as my Samsung Galaxy 8 cell phone. It should be SUCH a cool and capable device - it's bloody tiny, has scads of memory and CPU power, and a really great screen. But instead, it's a locked-down piece of corporate vomit, designed for people too stupid to count with both their fingers AND toes, and with all the user-customization capabilities of a maximum-security prison cell. I really, earnestly hope the major distros aren't aiming to take Linux in that same brain-dead direction of top-down neanderthalism. That would be the greatest shame of all. As if systemd (Fuck systemd!!!) wasn't bad enough...