Isn't systemd great?
It saves quite a bit of work for the BOFH, doesn't it?
"My home directory is full, and I need more space"
*klickety-klick*
"you now have 20MB free"
"oh, great, now I have a total of 40MB!"
"No, 20MB, and they are free."
Following closely after systemd version 256 comes 256.1, which fixes a handful of bugs. One of these is emphatically not systemd-tmpfiles recursively deleting your entire home directory. That's a feature. The 256.1 release is now out, containing some 38 minor changes and bugfixes. Among these are some changes to the help text …
Indeed. I remember when 20MB was as infinite as the Grace of God.
In fact, it was the (nominal) capacity of my first ever hard drive.
I, too, feel old.
I've just discovered my local brewery delivers to your door, in re-sealable bottles - which they give you a discount for returning. The fresh stuff only keeps for a few days - but they also deliver bottles. And then they do something like a wine box - but with 10L of lovely beer - or a big barrel for parties.
This is very dangerous knowledge to have. Particularly as now they've got my email address, they can tell me about all the seasonal beers they do - and it's only a £10 minimum order to have some delivered.
If I'm not careful, this may be my last year of feeling/being old.
Cheers for the beers! All hail to the ale!
That is indeed dangerous knowledge, and should be used sparingly, if only for the sake of your liver!
My own revelatory moment was when I discovered that in the country that's now my home, beer can be purchased in 64oz reusable "growlers" (stop sniggering at the back there, Brits) for a considerable saving over the per-pint price. For those evenings when a can is far too small, yet drinking an entire bottle of wine is de trop, a growler gives just the right flexibility!
Obvious icon. As I write this it's a gloriously sunny Friday evening, the lawnmower & tractor are cooling & resting after a hard afternoon's work... and so am I. Cheers.
My first PC had a 40MB hard drive. I felt a bit hard done by when, rebuilding the system, I found it was actually 42MB if you used the auto-config option rather than picking one of the predefined ones you got at the time. How things have changed: I have two 8TB drives sitting waiting for me to pull out an old server and rebuild it as a backup system.
IBM (real-deal) PC XT (8088 @ 4.77MHz, 640K RAM, Hercules monochrome video card, dual 5-1/4" full-height 360K floppies); had to remove one floppy to make room for the hard drive. I think it was about $2500 USD circa 1987?
1st upgrade: added hard drive
5-1/4" full-height 20MB Seagate (don't remember the model), using an MFM (Modified Frequency Modulation?) controller
2nd upgrade:
replaced the MFM controller with an RLL (Run Length Limited) controller - same drive, now 30MB! Woohoo.
Now, those weren't the days...
The first PC I used with a hard drive was an XT with a 10MB full-height hard drive. IIRC, it used DOS 2. Then, at another company, I used another IBM PC with a 30MB Western Digital hard card.
My mum and dad had an Amstrad 1512 that had at some point been upgraded with a 20MB HDD. Obviously I got to solve problems on that, and had experience on all sorts of PCs at work, including an AT running Digital Research multi-user DOS, with a Wyse terminal plugged in.
My first personal hdd was a 120 gig seagate in my Amiga, and I wondered how I would ever fill such a massive drive. I now have terabytes of ssd in my pc and it’s not enough, tbh.
When you low-level formatted that drive, did you use the option some disc controllers had to split that 40 MB into two 20 MB "virtual" drives ... because MS-DOS at that time could handle no more than 32 MB per physical drive?
Mine's the one with floppies containing copies of MS-DOS DEBUG and CORE International, Inc.'s COREtest* programs.
(*COREtest does not test RAM; it tests hard drives.)
The first PC we got (as opposed to the Atari 8-bit) was stupider than that. Instead of splitting it as 20+20, they split it as 32MB + 8MB! That had DOS 3.3, which still had the 32MB limit. I went Linux right after that, but on the DOS side went straight to DOS 5.0, where FAT still had a 2GB limit (not a problem yet, since I was running something like a 420MB hard drive by then). If I were retrocomputing, I'd just run FreeDOS since it supports exFAT directly: probably a 2TB limit due to MBR partitioning, or 256TB if it supports EFI (I have no idea if it does or not).
Then I gave that 40MB the workout of a lifetime -- when I threw Slackware onto my 16MHz 386, I moved the ST250R into that. Linux bypassed the 8-bit BIOS on the card, got about 3x the speed, and thrashed it about 3x as fast as it ever did under DOS. I was installing Slackware off 5.25" floppies I was downloading over dialup, so it took a while to fill that 40MB disk up, even with 8MB of swap (I had 4MB of RAM at that point).
Fun fact -- /dev/sda etc. were strictly for SCSI disks back then. IDE drives showed up as /dev/hda etc. (They decided to run SATA drives through the SCSI layer, so those showed as /dev/sda etc., then kind of ported the IDE drivers over so they also showed as /dev/sda; at that point /dev/hda etc. were retired.) The RLL "XT controller" was none of the above -- it showed up as /dev/xda!
I raise you a Western Digital Filecard: a full length ISA card with a 10MB hard drive and the controller.
Back when the average HDD was around 1GB, I worked for a company who used an even-then-ancient computer system that wouldn't recognise a HDD partition bigger than 20MB, so whenever we rolled out a new POS for a store, or rebuilt a dead one, the first job was to partition the shiny new 1GB hard drive down to 20MB.
When I enquired why we didn't update the software, I was told the guy who wrote it, and the only person who understood the completely undocumented code, had been fired, so now we were stuck.
This is possibly why that company no longer exists...
Someone will beat this - but... My first proper "non-CP/M-or-Apple" machine at work was a Data General with 2 (count them, 2!) disk drives - a fixed Phoenix drive with 5MB and a removable 5MB as a "temporary measure". It was replaced by a 25MB rack-mounted Winchester drive with a 1.2MB 8" floppy for install/backup/archiving/restore. Somebody from DG Sales came out to find out why we needed "such a large amount". I discovered that I could fill a 1.2MB disk with one day's work. Eventually the large colour Tektronix monitor that came with the system was helped along when I got an ancillary screen that was actually a genuine IBM AT with a massive 30MB HDD and a VGA(?) screen running terminal emulation software, with the option of transferring data from the DG to the AT.
The 25MB + 1.2 unit was something. I remember in the late 1980s hearing a "spang", which was at once followed by a system panic. The rubber belt that drove the drive had fallen off.
And they would corrupt their own disk ID block, requiring one to get in with a low-level editor to fix it.
Might you be thinking of a Hawk drive? Those were 5MB fixed plus 5MB removable. I thought Phoenixes were a follow-on with slightly more capacity.
David Lovett (Usagi Electric) restores this and other tech from the 50s-80s. Hawks were the main storage for the Centurion minis he loves. I really recommend all his content.
youtube.com/@UsagiElectric
https://github.com/Nakazoto/CenturionComputer/wiki/CDC-Hawk-Drive
At least there's a pretty clear example of the general design philosophy documented in the bug reporting system. Almost like saying "It works the way we say it does, why is that a problem?" without considering why "the way we say it does" might not be right or even worth questioning.
This just supports my view that systemd has grown much larger than any of us ancient hacks originally imagined. In the beginning, it made sense to come up with a system that could replace maintaining all those init files. Now it has grown to be larger than Linux itself. Regrettably, it is getting much more difficult to find a non-systemd Linux distribution.
Early systemd looks exactly like a copy of the Solaris SMF - which fixed a number of issues with the init system - dependencies, parallelism etc.
I recall there was discussion at Sun about whether it was too big a change to the traditional init system. Systemd took that big leap as a starting point and has spent the next 15 years trying to devour all of Unix.
I have no idea whether that was always the plan or it's just a case of feature-itis.
"I have no idea whether that was always the plan or it's just a case of feature-itis"
That's the problem. You start out with a simple goal, let's fix this stupid init file situation and take advantage of the multiple cores at our disposal to run things in parallel where possible.
So now you have something that has to manage a lot of dependencies, so it has a bit of complexity to it, but you probably think you implemented that in a pretty slick way. You start seeing other things that are done poorly that this slick system you set up could handle, so you add them, which probably requires adding a bit more capability to your slick system, making it a bit slicker in your eyes. The more capability it gets, the more things it could "do better", which get brought under its control. It's like a black hole growing larger and larger as it swallows suns, giving it more gravity to influence space further away than before.
The black hole didn't have a plan, it just grew based on what moved within its sphere of gravitational influence.
"Its like a black hole growing larger and larger as it swallows suns"
Nah. It's clearly a cancer.
Consider: systemd takes root in its host, eats massive quantities of resources as it grows, spreads unchecked into areas unrelated to the initial infection, refuses to die unless physically removed from the system, sometimes kills off important subsystems at seeming random, all the while doing absolutely nothing of benefit to the host. That sounds an awful lot like a cancer to me ...
So do what I do and call it the systemd-cancer. Short, descriptive, accurate, has been known to scare management/moneybags away from distributions containing it ... what's not to like?
"I have no idea whether that was always the plan or it's just a case of feature-itis."
I'm sure I'll catch hate for this, but as Ballmer noted, the way to deal with the Linux problem is to:
1) Embrace the technology and community
2) Extend functionality in small useful ways to gain support and trust
3) Extinguish once the critical mass of support is reached, destroy the system from the inside.
Sounds pretty on point looking at the actions of SystemD's creators and maintainers, both of whom are Microsoft employees.
Just speculation, but if it ever rang true...
"If Boccassi's name is unfamiliar to you, he is the chap who came up with the pithy line "now with 42% less Unix philosophy" which we reported in our story on the release of systemd 256 last week.
No, it did not originate with systemd daddy Lennart "Agent P" Poettering. We do note, however, that Boccassi is Poettering's colleague at Microsoft."
And Lennart Poettering created SystemD; both are MS employees.
"almost universally disliked by those using it on customer sites."
The only people I ever met who actually claimed to like it were kids who learned it as their first variation on the theme. Even inside Sun, the staff outside the SMF development group and management almost universally hated it ... to the point where a couple friends of mine were threatened with firing for "trash talking" it in a meeting.
AIX had a habit of using /etc/inittab rather than /etc/init.d files. It also had a system like SMF, called the System Resource Controller, which let one group related services. I found it quite useful. Also light on resources. It wasn't a replacement for init. Digression: the AIX ps aux command used to show the NOOP component of the process table. On a quiet machine this would show up as high figures on process 1. A user given root access by manglement killed the process taking all that CPU. No guesses what happened.
Pity IBM is obsoleting AIX. Best Unix-like OS I have used for commercial activities. As for systemd, see the Halloween Documents.
> Regrettably, it is getting much more difficult to find a non-systemd Linux distribution
Slackware. As well as being systemd-free it also has far fewer "What the hell were they thinking?" moments than any other distribution I've tried.
There's also Devuan, but then you've got to deal with the nightmare that is the Debian packaging system.
Another systemd free distro is PCLinuxOS.
Forked from Mandrake a long time ago PCLOS still has many of the utilities that made Mandrake so easy to administer.
It's my distro of choice and if you are happy with a conservative rolling release distro then I think that it is well worth taking a look at.
Texstar has stated over and over that PCLOS will never use systemd and that makes me feel a lot happier when I hear of all the incidents caused by systemd.
Add Void Linux to the list. I’ve been using it for years and there’s not much not to like. I originally started using it because it is free of system-dreadful
If it ever gets to the point where this crap is completely unavoidable on Linux then I’ll definitely be jumping back to BSD
"Forked from Mandrake a long time ago PCLOS still has many of the utilities that made Mandrake so easy to administer."
Is that alive? The style of its web page looks like it is still 2000...
Anyway, I loved Mandrake back in the oughties, maybe I should give PCLinuxOS a try.
Plan9 is an interesting OS ... I've been running it on one box or a dozen, and in one guise or another, since it was first made available. To date, I have found absolutely no use for it at all, except as a tool to learn about OS design, and as a curiosity. I used it as my main writing platform for about a year (coding, documentation, contracts, the books I'm writing, longer posts to ElReg, dead-tree letters, etc. ... ). Honestly, I gave it a good solid chance, but I'm back to Slackware.
Plan9 is the poster child for a solution looking for a problem.
But I like the silly thing. I want to find a use for it. Maybe someday.
"it made sense to come up with a system that could replace maintaining all those init files."
All those init files just stay there doing their job. Very seldom do you have to "maintain" them and if you do they're so transparent as to function that you can develop and test from the terminal, stepping through them if necessary.
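For example (a minimal sketch; the path and service name are hypothetical):

# Watch a SysV-style init script do its thing, one command at a time:
sh -x /etc/init.d/myservice start

Try getting that level of transparency out of a compiled service manager.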
"Regrettably, it is getting much more difficult to find a non-systemd Linux distribution."
Devuan.
So they built their own version that is an even bigger unplanned mess.
They still think it's better because they never bothered to go back and understand what they ripped out and rewrote. They just built a new one, slapped together without a clue or a shred of compatibility, using whatever random syntax they thought of at the time.
Not thinking about the user community, ever, is pretty much their repeat fail. This clown is a perfect example. Lazy F'er couldn't be bothered to update his documentation, made a dangerous breaking change without thinking, didn't check in with anyone before deploying it, immediately victim blamed. Systemd team culture in a nutshell.
Deleting someone else's home folder isn't what people want out of a "cleanup script" when somebody accidentally fat fingers the return key before finishing typing in all of the arguments. The thing is a footgun as designed. Instead of building a new tool with reasonable scope, and appropriate behavior, this idiot just added bloat till he broke something.
Looking at the follow-up article on /. it appears that he learned too well at the feet of the master when it comes to user support. Other systemdefilers, including the master himself, seem to have come out with comments to the effect that this wasn't a Good Idea.
> In the beginning, it made sense to come up with a system that could replace maintaining all those init files.
Not even in the beginning. And now we have to maintain all those systemd config files instead.
Who could possibly have seen that coming? Oh... yes, everybody.
Perhaps bluca should have dogfooded the systemd user experience first-hand with his own data before posting that.
So now purging tmp files is the duty of an Init system, not, say, a cron job with the cleaner of your choice?[1]
Or am I just so totally out of touch that I've not realised that Init means "every single second of your system's runtime, even when it is running stably" and not just that bit at the beginning with all the messages on the console?
[1] and far simpler ones than this beast, according to its manpage. Although, some of those options do look really useful, if you find that the rest of the Init keeps dying 'cos the processes invoked keep messing up their own files/dirs and need them to be quickly recreated before said process is started. Not that anything in a systemd based setup would ever need such patching, would it?
Not just tmp files; it feature-crept to do a bunch of other things, but it's still called tmpfiles because the maintainers can't be bothered to change the name, and until now the purge option still deleted everything it now manages, as if it were all just temporary files.
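To sketch the footgun (this is my reading of the behaviour and of the 256.1 fix, so treat the details as approximate):

# Pre-256.1: with no config file named, --purge applied every installed
# tmpfiles.d entry -- including home.conf, hence the vanishing /home and /srv:
systemd-tmpfiles --purge

# 256.1 reportedly refuses to run --purge unless you name a config explicitly,
# e.g. (hypothetical unit's config):
systemd-tmpfiles --purge /usr/lib/tmpfiles.d/myservice.conf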
It is probably fairer to say that systemd isn't an init system. It contains one, but the project as a whole appears to be an effort to rewrite every Unix daemon from scratch and make the new versions the installation default for that particular purpose.
Viewed from that perspective, the obvious option is to use the init system or not according to taste, but avoid all the re-writes for a decade or two until they've had a chance to bed-in.
I don't know if any distro has taken that approach. I haven't heard of one. Perhaps it is impossible, in which case systemd's own init process is clearly unfit for purpose.
That was my initial (get it?!) plan. Pick which systemd bits I liked (init and convenient way to parse system logs on a single host) and not use everything else: systemd-timectl - rubbish. NetworkManager - very rubbish, etc etc.
Trouble is, there isn’t a ‘choose the systemd bits you want’ option in any distro installer, and I certainly don’t have the time to make my environments non-standard; might as well roll your own at that point. It’s a shame - some great ideas and some poor ones, all overshadowed by some really unpleasant responses from people who seem to have weird axes to grind.
At least they haven’t had a go at package management… yet.
It's not weird, it's FOSS idealism functioning in the real world. Someone thought they could "do better", just like a fork, and created a project to suit their own very personal agenda. Coding ensued.
The problem is that it grew, which really wasn't a problem in regards to the FOSS belief; the problem is that it has been adopted by other systems without user community feedback or approval. The maintainers of the distros did a very *un*FOSS thing and imposed a system without option or choice.
What this shows is, really, the fundamental weaknesses of the 'ideal' of the FOSS system. The general public think they are being given a choice when they choose FOSS, but in reality you are simply getting someone ELSE'S premade choices to choose from, which is fundamentally not really different from any *other* system you can use. You only have more choices due to the absolute number of forks, created by every Tom, Dick or Harry who thought they could "do better". The choices are a bit of an illusion; the systems are simply an assembly of premade modules... that, mostly, are created by someone else.
So all these wonderful wizards from On High have decided that they like systemd, too bad for you that you don't get a word in, [as usual] we really don't CARE about your opinion (see: the tmpfiles --purge feedback. Because deleting files from /home that aren't in a "/home/temp" is ALWAYS a good thing...).
And that's what an 'unregulated' FOSS system gets you - the Wild West of programmer's ideas, ideals and beliefs with little oversight of someone looking over their shoulder and saying "Yes, that's a STUPID idea".
Fair points, I'd also point to one of the core flaws of the team, which is a committed opinion that only THEIR opinions matter. They in this case being the community making code commits to THEIR project.
Not the Kernel team, not the commercial distros that pay for a big chunk of the work, not the universities that originally granted most of the code to the community and designed the core, the philosophy, and taught most them.
Not the hyperscalers or the server wrangling greybeards holding large parts of industry together.
So why not constantly introduce breaking changes all over the place, because none of those people matter if it runs fine on the programmer's laptop and they like it better that way, right?
The world be damned.
The only point of yours I disagree with is that the SystemD team in any way embodies "ideal FOSS development". At this point they are a well-documented embodiment of some of the oldest anti-patterns. SystemD should be taught in grade school as one of the canonical examples of the Blob. The people that made this mess should be forced to break it back up into pieces, and further attempts to spread the cancer to additional functions should be blocked from the upstream distros till they get their sh*t together.
Sadly, the community lacks collective organization or internal accountability structures to address out of control Devs and Dev teams like this. Linus basically kicked them out of the Kernel some time ago. His leadership model probably isn't a solution for the big distros, but something that gets the same job done is needed outside the kernel team just as badly.
My response is "Exactly", to both of you. Jake is blind to problems of FOSS as an apparent True Believer. FOSS can be great...but it can also be an unmitigated disaster, and sometimes the difference is only based upon which developer is doing what. With little to no community feedback on some projects - or, even worse, a complete dismissal of user feedback because of ego or other issues - you can end up with useful FOSS, or simply a /home-deleting mess. Two decades of "But so many eyes are on the code! FOSS can't fail!" yet plenty of bugs and security blunders are recognized now.
It is NOT a guaranteed fix. FOSS has benefits... FOSS has problems. Grow up and acknowledge this so you can move forward rather than putting your head into the sands of denial.
Having the component that manages temporary files expand until it's managing the user's home directory is emblematic of systemd's dysfunctional development culture. Mission creep, feature spread, taking over arbitrary new things at random, and then acting like that's what you're supposed to expect from poorly-named tools. It's only a matter of time before systemd absorbs /etc and any other system config into a "unified binary format" and "common configuration interface". This new systemd-config-register will then presumably expand its role to store user email.
Bleargh! That sounds as appealing (appalling?) as the YaST configuration file (does SUSE still have that? For me it was one of the reasons to ditch it).
Still have an hour at work, and the secret stash of those -->
is empty, otherwise I could fight my PTSD with it.
Meh, I guess the young ones will be able to cope with that madness, I won't[+]. I won't touch it at home, nor at work (it helps that I'm not the sysadmin of the productive systems and my test systems can be any Linux I like).
[+] Yeah, let them repeat the mistakes, or - more likely - come up with things that are in fact better than what we had. As long as I'm not bothered by their stupidity (ok, our elders, who we did not respect contrary to the narrative we like to spread, had to suffer as well, so what goes around comes around, let's be honest, folks!)
Bocky Buffoon: - "Stupid fsck-ing users! How dare they assume my delicately and perfectly balanced code is robust?!"
R*cky R*cc*n: - "Err, Bocky, maybe we should make it a bit more user friendly - I mean, it is userland out there, not our trusty AI water cooler."
Agent P: - "Haven't we got far enough through embrace and extend to start extinguishing the OS-who-shall-not-be-named with poisoned code yet?"
R*cky R*cc*n: - "Well, Devuan downloads are still healthy enough."
Agent P: - "Oh, bugger, all right then. Bocky, see to it."
Bocky Buffoon: - "Mutter, mutter, nobody appreciates my genius around here."
* Apologies to a certain Mr. McCartney.
I saw it in operation in Claremont Tower, being used by Dr Lindsay Marshall to copy the Keel kernel mods source code to a tape drive on a different machine (it maintained UNIX filesystem semantics across the network, including for devices) for me.
I also heard the complaints about its impact on the 1Mb/s Cambridge Ring that they used for the networking in the computer lab.
I can't remember exactly where it was, but somewhere else in Newcastle, I saw it being used on a PDP-11 running Microsoft Xenix. One really clever thing was that as long as you had the comms. device in the kernel, the rest of the implementation was done in the C library by changing a small number of the library stub routines for things like open(), creat(), cd() and a few more. You did not need the complete kernel source to add it to a system.
It was very cool.
Ach. Keele University Kernel mods that moved the I/O buffer space for DMA devices out of the Kernel address space, by using the 18-bit Unibus map to make them available higher up in memory when doing block I/O.
This freed up space in the Kernel address space that could then be used for additional device drivers, something very useful on non-separated-I&D PDP-11s like my 11/34e. Before installing them, I had to choose either a kernel with the TU11 tape drivers configured in, or one with the DZ11 8-port serial card for extra terminals. It also allowed more DMA I/O operations to be in flight, because I could set aside space for more buffers.
Firstly, the Newcastle Connection (a.k.a. Unix United) needed a common name space across all of the systems taking part in the connection. I really don't think that you would be able to enforce that beyond your highest node in the environment. And IIRC, root was handled in a special manner. I don't know whether they got to the point of mapped names and UIDs, like Samba does.
Secondly, the Inter-what? In 1982/3 when I was shown this, outside of the core Arpa/Internet servers in the US, wide area networking was UUCP/Usenet/Janet/Prestel plus some bulletin boards (and core Internet access was Telnet and FTP with maybe some SMTP, Archie and Gopher). While things like NFS did exist in the '80s (I saw that operating in 1985), nobody would even think about exposing the systems running filesystem services to wider area networks. I was aware of TCP wide area networking from back in 1981, when one of the research students at Durham University hopped around the world from server to server, landing back on the Newcastle MTS system, with a latency of several seconds, but it wouldn't have been usable with any fileserver protocol.
Thirdly, and most sadly, the Newcastle Connection was before its time, and never gained sufficient traction (while NFS did). There were moves to port it to some other platforms (one research project I was on the periphery of stated it was trying to add it to CP/M over serial lines - obviously it failed), and it's been consigned to history as an interesting exercise that never really got anywhere.
The kids removed the soc.genealogy hierarchy, which finally silenced a storm of spam which had been running for months. Unfortunately, either multiple attempts to explain how to post to Usenet had failed to get through to the Groupies or, more likely, the spammers had driven everyone else away.
And why did other distributions so readily adopt it?
25 years ago we nicknamed Suse "The Windows from Nuremberg", and Redhat was being referred to as a 'job creation program' (not a bad thing in itself, but): 'for anyone who shouldn't be let anywhere near a keyboard'. It begat a new cohort of Linux 'experts'. And don't get me started about the Debianauts or the Ubuntans.
Short of rolling your own (if you have the time), Slackware is the only one I can think of which still leaves you in control. (But there you have to actually know what you're doing - and if you do, it makes for happy users, and also means that when your phone rings, it is only for a change request.)
(Just in case: I've worked with - not just used - Minix, Coherent, A/UX, Irix (MIPS), Almost Imitating uniX, SunOS 4.1 and upwards, various BSD flavours, and of course various Linux distributions (where I still have fond memories of Yggdrasil - the sole reason I learned Tcl/Tk).)
"and also means that when your phone rings, it is only for a change request."
In MeDearOldMum's case, it's usually an invitation to tea. She's been a happy Slackware user for coming up on a couple decades now. The last time I remember her needing technical support was about a year ago when I installed a new printer for her. Prior to that was 2018, when I installed the previous printer (she's afraid to plug new hardware in for herself).
In a word: Gnome.
Gnome these days depends on SystemD. There's a lot of RedHat controlled resource working on Gnome and systemD, and they've seen fit to create that dependency. If you want your distro to have Gnome as an option, you have to have systemD. And unless you're totally stark raving mad, your server spin of your distro is going to be much like your desktop version. So, that gets systemD too.
It's interesting to consider whether RedHat has any ulterior motive for bending all Linux distros this way, by using its control over dev resources to out-code competitors. What might RedHat actually do? Well, having got the Linux world irrevocably stuck with systemD, they could decide to stop making their copy of the source code publicly available (just like they've done with RHEL). The result is pretty bad for other distros if RedHat chooses to internally fork these things and emits only binaries to paying customers. The public fork falls behind pretty quickly. The combination with RHEL now being (effectively) closed source is quite severe too. Basically, if IBM bought Ubuntu too and pulled the same trick, then Linux = IBM, and (effectively) closed source. Would global competition authorities step in and prevent such an acquisition? I doubt it.
Oh there may well still be a Linux kernel project publishing their source as they do today, but if the two major branches of Linux fall under IBM's control then effectively they get to choose what kernel users are actually able to run, and it won't be free.
One problem with your cunning plan: According to Linus himself, Linux (the kernel) is not now, and never will be, dependent on the systemd-cancer. And both the SysV and BSD inits are freely available to bolt onto the Linux kernel. And the entire world of GNU tools are available, none of which require the systemd-cancer to work properly. And I'm pretty certain x.org will never mandate the systemd-cancer.
In other words, there will always be a way to roll out a Linux distro that will be free of the systemd-cancer (and GNOME. (And Wayland, which you somehow forgot.))
Or, you could just run Slackware.
"Linux (the kernel) is not now, and never will be, dependent on the systemd-cancer."
Poettering and the rest of satan's little helpers have a completely different point of view and are hell-bent on making it prevail. No matter. As the systemd cancer spreads, it's inevitable its crufty bloatware will soon assimilate the kernel. Assuming it hasn't already done that.
One day soon systemd will replace modprobe and/or control what goes in /modules. Or for bonus points, Poettering decrees kernel code - or even the boot loaders - doesn't get to run unless systemd says so. The systemd fanbois could make the kernel silently dependent on libsystemd, like how that fucked up ssh recently. I could come up with more examples but I choose not to.
The thing about systemd is it presents so many attack surfaces, it can infest anything. Listing them all would take too long. We're only constrained by our imagination. I try not to think about how to exploit the systemd horror show because it's bad for my mental wellbeing.
Oh I know it will be possible, the trouble is assembling the resources to build an alternative distro and maintain it. Presently a lot of the resources that do this are in Ubuntu and RedHat. The former is vulnerable to acquisition by the latter.
Slackware may indeed garner a lot of interest if IBM ever does go nuclear...
Except, with Agent P and cronies defecting to The Opposition, IBM may have been outmanoeuvred by them for a second time.
Personally I think absorbing the kernel into SystemD is a great idea. It will guarantee a kernel fork and new branding for the cancer. They could call it Android or something. The rest of us can then get on with our lives in peace.
> RHEL going (effectively) closed source.
Oh stop it. Stop now and never do it again.
No it blasted well is not.
You want the RHEL source? Get a free developer account.
Want the current WIP RHEL-point-next? CentOS Stream. Source here:
https://gitlab.com/redhat/centos-stream
Want more current than that?
https://src.fedoraproject.org/
It's all right there, same as it ever was. What you can't grab is the _oldest_ version which the company spends $millions hardening and tightening up.
Which is 100% fair, legal, free, FOSS compliant and reasonable.
The GPL says:
«
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received.
»
*The recipients.* That means for commercial software the buyers. Read the GPL: it is admirably short and clear.
Not "the world". Not "everyone". Not "anyone who asks". Only the customers.
https://www.gnu.org/licenses/gpl-3.0.txt
RH is a profitable company and is under no obligation to give its commercial products away.
Just because it did for years... just because folks got used to it... confers no rights. They have none. What they have is a feeling of entitlement.
Tough. There's a thousand other distros that aren't commercial. Go use one.
Or, use one of the two separate 100% free distros RH offers gratis: CentOS Stream, which Meta runs on, or Fedora.
I am no fan of RH and do not use any of its products but it has done nothing wrong.
Hmm ...
$ systemd-cancer --help
systemd-cancer --infect <exe> [OPTIONS]
OPTIONS:
--ignore-established-usage [DEFAULT]. Completely ignore the normal way of doing things.
--honour-established-usage [NOT IMPLEMENTED]
Replaces <exe> with systemd-<exe>. Breaks all existing usage.
You wouldn't, but it's hardly likely that a distro outfit like Ubuntu are going to have a Gnome/systemD based distro for desktop, and a very different systemD-free server distro. If they want their desktop and server offerings to be from a common basis, the server version suffers whatever it is the desktop version has to have to be a desktop version.
[Author here]
> Gnome these days depends on SystemD.
No it does not.
GNOME on FreeBSD:
https://forums.freebsd.org/threads/freebsd-screen-shots.8877/page-73#post-551955
... and on OpenBSD:
https://www.reddit.com/r/UsabilityPorn/comments/qr07tx/openbsd_70_gnome_404/
... and on NetBSD:
https://www.reddit.com/r/NetBSD/comments/mdm8j9/preview_of_gnome_40_on_netbsd/
Included in Alpine which has neither systemd nor GNU libc:
https://wiki.alpinelinux.org/wiki/GNOME
"I wonder why?"
Because the clueful know that so-called "Xenix" was just a rebadged, bog-stock, AT&T PDP11 UNIX Version 7, ported to other architectures by third parties (For example, SCO[0] did the popular x86 port).
[0] No, not the SCO of insane litigation fame. Not really, anyway.
SCO ported Xenix (which was nothing but a rebadged, bog-stock, AT&T PDP11 UNIX Version 7) to the IBM PC's 8086/8088 architecture in roughly 1983. Most of us yawned ... although looking back, it was a pretty good hack by SCO! (Consider, for example, that the IBM PC didn't come with an MMU ...). AT&T's lawyers decided that jealously guarding the UNIX trademark was a good idea, thus the rebranding into Xenix.
Microsoft didn't own the code; they had a license to use it, and to sub-license it to third parties, from AT&T, who weren't allowed to sell into the commercial/retail market for anti-trust reasons. There were other ports, usually done by the hardware manufacturers, for the TRS-68000, Zilog Z8001, and Altos 8086.
Incidentally, SCO also did the Apple Lisa port of Xenix.
I posted this about systemd on El Reg over 6 years ago before the Microsoft thing.... How can we make money?
A dilemma for a Really Enterprise Dependant Huge Applications Technology company - The technology they provide is open, so almost anyone could supply and support it. To continue growing, and maintain a healthy profit they could consider locking their existing customer base in; but they need to stop other suppliers moving in, who might offer a better and cheaper alternative, so they would like more control of the whole ecosystem. The scene: An imaginary high-level meeting somewhere - The agenda: Let's turn Linux into Windows - That makes a lot of money:-
Q: Windows is a monopoly, so how are we going to monopolise something that is free and open, because we will have to supply source code for anything that will do that? A: We make it convoluted and obtuse, then we will be the only people with the resources to offer it commercially; and to make certain, we keep changing it with dependencies to "our" stuff everywhere - Like Microsoft did with the Registry.
Q: How are we going to sell that idea? A: Well, we could create a problem and solve it - The script kiddies who like this stuff, keep fiddling with things and rebooting all of the time. They don't appear to understand the existing systems - Sell the idea they do not need to know why *NIX actually works.
Q: *NIX is designed to be dependable, and go for long periods without rebooting, How do we get around that. A: That is not the point, the kids don't know that; we can sell them the idea that a minute or two saved every time that they reboot is worth it, because they reboot lots of times in every session - They are mostly running single user laptops, and not big multi-user systems, so they might think that that is important - If there is somebody who realises that this is trivial, we sell them the idea of creating and destroying containers or stopping and starting VMs.
Q: OK, you have sold the concept, how are we going to make it happen? A: Well, you know that we contribute quite a lot to "open" stuff. Let's employ someone with a reputation for producing fragile, barely functioning stuff for desktop systems, and tell them that we need a "fast and agile" approach to create "more advanced" desktop style systems - They would lead a team that will spread this everywhere. I think I know someone who can do it - We can have almost all of the enterprise market.
Q: What about the other large players, surely they can foil our plan? A: No, they won't want to, they are all big companies and can see the benefit of keeping newer, efficient competitors out of the market. Some of them sell equipment and system-wide consulting, so they might just use our stuff with a suitable discount/mark-up structure anyway.
I know I'm asking a stupid question in asking "what is systemd-tmpfiles actually for?", like I'm expecting some logic.
The suggestion I've gleaned from the man page for both it and its config file format is that you'd use it as a general-purpose file system environment creator for some service or other. E.g. if a service wanted a directory and a named pipe in it, you'd create a configuration for systemd-tmpfiles to create those, and systemD would be able to clean them up afterwards.
Except that I'm left wondering why on earth one would ever want to create the prerequisites that way in the first place. Why wouldn't the service simply be coded up to create what it wants for itself, and to clean up what it no longer needs? If, as a service developer, I decide "no, I'm going to use systemD for this filesystem environment creation purpose", I'm now needlessly and (so it would seem) complicatedly dependent on the behaviour of the tool. That's a bad thing, because it seems the behaviour is still evolving (hence, people getting caught by surprise). And my service's file system environment can now be tinkered with by the system admin in ways that could be wrong, preventing the service from operating properly in the first place. None of those things can go wrong if my service does all this creation work for itself, taking no more than a single parameter for the path where the user wants all these file system objects to be created.
I just don't see the point. Anyone got any believable reasons why one would actually want to use it?
The rationale goes like this:
developer of service xyz (a systemd unit), which needs "a directory and a named pipe in it", uses its own way to create a temp dir like xyz_<currenttimestamp>/, but that may not work in some obscure context, maybe in a container or whatever.
The missive from systemd HQ is to rely on systemd-tmpfiles instead, where systemd guarantees it works for you. The consumer of tmpfiles does not have to deal with edge cases any more. Overzealous --purge notwithstanding.
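Something like this, I believe (service name hypothetical; see tmpfiles.d(5) for the exact column meanings):

# /usr/lib/tmpfiles.d/xyz.conf (hypothetical)
# Type  Path           Mode  User  Group  Age  Argument
d       /run/xyz       0755  xyz   xyz    -    -
p       /run/xyz/pipe  0600  xyz   xyz    -    -

'd' creates the directory if it's missing, 'p' creates a named pipe (FIFO), and systemd-tmpfiles handles ownership and cleanup, instead of every daemon rolling its own mkdir/mkfifo logic.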
Are they implying that the people who packaged the OS are at fault, since tmpfiles.d shouldn't have entries for things that aren't, you know, temporary files, like one's home? Or are they reminding us that life is fleeting and that nothing is really ever permanent?
By their logic, it wouldn't be unexpected if running "rm" with no options removed everything in the current directory... Not sure I like that thinking.
I wondered that too, since as a Debian dev I saw the initial reports of this on IRC and initially thought it was due to some Debian-specific packaging, but it turns out this config was added upstream NINE years ago, in 2015. Reading between the lines of the commit messages, I suspect this is part of the systemd focus on creating/generating, and starting the host with, immutable full-disk images that include the partition table and all file-systems.
Looking at the 2 commits, it looks like the original intention was simply to ensure these directories are present on boot, but a side-effect of a --purge is that they're also all cleared! Not nice that it could include remote file-system mounts, since this specific file includes /srv/, but there are other vital directories listed in the commits (`/var/` anyone?)
systemd$ git l tmpfiles.d/home.conf
822cd60135 2015-10-22 01:59:25 +0200 N Lennart Poettering tmpfiles.d: change all subvolumes to use quota
fed2b07ebc 2015-04-21 17:43:55 +0200 N Lennart Poettering tmpfiles: make /home and /var btrfs subvolumes by default when booted up with them missing
git show 822cd6013 fed2b07ebc tmpfiles.d/home.conf
commit 822cd601357f6f45d0176ae38fe9f86077462f06
Author: Lennart Poettering <lennart@poettering.net>
Date: Wed Oct 21 19:47:28 2015 +0200
tmpfiles.d: change all subvolumes to use quota
Let's make sure the subvolumes we create fit into a sensible definition
of a quota tree.
diff --git a/tmpfiles.d/home.conf b/tmpfiles.d/home.conf
index aa652b197f..9f25b83392 100644
--- a/tmpfiles.d/home.conf
+++ b/tmpfiles.d/home.conf
@@ -7,5 +7,5 @@
# See tmpfiles.d(5) for details
-v /home 0755 - - -
-v /srv 0755 - - -
+Q /home 0755 - - -
+q /srv 0755 - - -
commit fed2b07ebc9e8694b5b326923356028f464381ce
Author: Lennart Poettering <lennart@poettering.net>
Date: Tue Apr 21 17:28:16 2015 +0200
tmpfiles: make /home and /var btrfs subvolumes by default when booted up with them missing
This way the root subvolume can be left read-only easily, and variable
and user data writable with explicit quota set.
diff --git a/tmpfiles.d/home.conf b/tmpfiles.d/home.conf
new file mode 100644
index 0000000000..aa652b197f
--- /dev/null
+++ b/tmpfiles.d/home.conf
@@ -0,0 +1,11 @@
+# This file is part of systemd.
+#
+# systemd is free software; you can redistribute it and/or modify it
+# under the terms of the GNU Lesser General Public License as published by
+# the Free Software Foundation; either version 2.1 of the License, or
+# (at your option) any later version.
+
+# See tmpfiles.d(5) for details
+
+v /home 0755 - - -
+v /srv 0755 - - -
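For anyone not fluent in tmpfiles.d type characters, my paraphrase of tmpfiles.d(5) (double-check before relying on it):

# v  create the path as a btrfs subvolume if missing (plain directory on other filesystems)
# q  as v, but the subvolume joins the parent's btrfs quota group(s)
# Q  as v, but the subvolume gets its own new leaf quota group
v /home 0755 - - -

So the 2015 commits above were only ever meant to make sure /home and /srv exist (with quota) at boot; nothing in those lines marks them as safe to delete.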
"These tools are written and maintained by small teams of mere humans, and humans mess up occasionally."
More to the point, they're used by humans with the same limitations. And it's a bit much to hide behind "It does what it says on the tin." when what it says on the tin is "tmpfiles".
I think I'll continue to stay systemd-free.
Ironic at first sight, but really it's an object lesson in system design.
The old init systems were/are chronically clunky, with a mildly complicated set of processes, scripts and config files passing stuff around. To mix metaphors, SystemD set out to cut the intractable knot with a clean sheet of paper. It ended up being even more obscure and convoluted than before.
SystemD will fail, not because some Evil Empire seek to embrace, extend, extinguish, but because it has been wrong-headed and chasing rainbows from Day One.
[Author here]
> Seeing as the desktop environments are increasingly tied to Systemd, its going to be difficult to move.
[[citation needed]]
No, they are not. GNOME runs on all the systemd-free distros, as well as on FreeBSD (and maybe the other BSDs; after all GNOME 2 ran on Solaris). GNOME is the default desktop of Chimera Linux which is not only systemd-free, it's GNU-free. It uses the FreeBSD userland on a Linux kernel. Its developer "Q66" has some excellent posts explaining *why* systemd is being adopted in the mastodon thread I linked to in the article.
KDE is from an entirely different organisation and runs on NetBSD, FreeBSD, OpenBSD, etc.
None of the others have any links to systemd at all.
You are repeating ill-informed speculation and fear-mongering. Please don't.
[Author here]
> But don't confuse it with chimera which is the latest release of Devuan.
No it isn't.
1. The current Devuan is v5, Daedalus:
https://www.devuan.org/os/announce/daedalus-release-announce-2023-08-14
2. The previous version was v4.0 Chimaera
https://www.devuan.org/os/announce/chimaera-release-announce-2021-10-14
That was in 2021.
3. Note, "Chimaera" not "Chimera".
[Author here]
> After seeing the posts from q66.
> I'll avoid chimera linux from that twat
I've met him. He is very smart, highly opinionated, and doing more to create a modern Linux userland that's free of all the modern cruft than anyone else. Do not dismiss him so easily or idly.
Hmm, well, it's rather tenuous. From what I've learned, Gnome can use (reddit link) either systemd-logind or ConsoleKit (or at least, it did so 11 years ago). ConsoleKit was reported dead 11 years ago, the last maintainer being one L. Poettering.
Gnome apparently uses various systemD utilities for configuration.
Three years ago it was reported that Gnome builds on FreeBSD, except that FreeBSD (another reddit link) provides some patches that allow Gnome to be built without the systemD libraries. The last update seems to be more than 2 years ago, which would be a bit odd if Gnome could simply be built unpatched on any common or garden *nix system.
Ten months ago I saw reports that, yes, Gnome ran on Devuan, but the small handicap was that (yet another reddit link) nothing could be run, not Firefox, nor Brave.
The Wikipedia article on Gnome says that Poettering himself wanted to make systemD a dependency, which is probably where the fuss all started. And indeed it seems there is a deliberate policy of full-fat Gnome being dependent on it. For example, there's no doubt a *nix way in which Gnome could have supported multiseat configurations, but no, they've gone and relied on systemD for that.
Ok, so that's a bunch of unreliable reddit links and a more reliably cited Wikipedia article. However, it seems far from safe to say that "Gnome runs on all systemD-free distros". Patched Gnome evidently does, as would stock Gnome atop reimplementations of the bits of systemD that it wants (hence, elogind...), and the Gnome project has anyway set out to provide only basic functionality without systemD. This is somewhat removed from what other desktop projects do, which is to be pretty much *nix OS neutral.
Gnome 2 predates SystemD by 8 years, and Solaris had Gnome 2 long before systemD came along. I used to use that (ah, the memories). Solaris running Gnome 2 is in no way an indicator that the BSDs can trivially build and run Gnome 40+ in the modern era.
Regardless, as a means of influencing other distributions, Gnome is a potent tool. RedHat are the largest contributor to the project, it's one of the more monied projects, and it can probably out-develop all other OSS desktop projects. If RedHat wanted to, they could drag the project off in a direction that makes it even harder for non-systemD OSes. This isn't a matter of "they'd never do that"; RedHat are a confirmed profits-at-any-cost corporation, clearly with no intention of sticking by the accepted norms of OSS where it really counts. They've gone from being a company making money from support to a company making money selling binaries built from what is largely other people's source code. If they can get commercial advantage by manipulating Gnome or SystemD, they probably will.
From the point of view of other distro projects that wish to survive long term, relying on RedHat playing "nicely" seems ill-advised.
From https://en.wikipedia.org/wiki/Saltzer_and_Schroeder's_design_principles :
Fail-safe defaults: Base access decisions on permission rather than exclusion.
That's the second of them. Before that we come to:
Economy of mechanism: Keep the design as simple and small as possible.
There's also:
Least common mechanism: Minimize the amount of mechanism common to more than one user and depended on by all users.
So much fail.
The main thing that systemd gives to the world is a repeated middle finger ... !!!
Why do so many clever people drift towards megalomania when they have an idea that is 'so great' that 'lessons learned' do not apply !!!
Systemd *may* have been an interesting idea to investigate *but* now it is no more the 'right way' than the 'init file hell' it was trying to replace.
Overloading the functionality endlessly to be 'clever', default behaviour that is less than obvious and now we end up with a 'different' set of unknowns to the init files that you forget the syntax/contents of !!!
It is simply a case of 'My Monster' AKA 'systemd' is the correct way to do things because I created it & my ideas cannot possibly be wrong !!!
:)
Correct me if I'm missing something but systemd was initially conceived to address the fact that the existing init systems were all pretty much single-threaded - and could be a little tricky to configure correctly, what with daemons having interdependencies and all - and to ease the maintenance of the init system.
Fact is, *Nix boxes running on bare metal generally don't reboot often - and when they do, it's almost always planned. It follows, then, that one can usually time when the system does its thing to minimise disruption. As to the maintenance: once init processes have been configured correctly they don't really need 'maintaining' so one has to ask if systemd really adds much value to such systems.
However, faster booting really comes into its own in one of two scenarios: server virtualisation - especially in a flexible-scale, hosted-services model - or when your base OS is so tied into the user application layer that instability in the latter causes the OS to break (I reflect on the many, many times I had to reboot a Windows box because an app running on the desktop had crashed. I don't know if this is still a 'thing': I've not used Windows for, oh, ten or more years). In these situations, having an OS that can rapidly start/restart is surely a Good Thing. But hold on: if I'm virtualised, my client OS shouldn't need to spend too much time faffing about during its boot process because the environment it's in is already able to service requests, is stable and, therefore, very predictable. As to the problem of client-side apps killing the base OS? Well, common sense suggests that userland should be abstracted from the base OS anyway - something *Nix has generally always managed rather well.
I foresee the *Nix community splitting into two camps in the not too distant future: many will accept systemd and the slow creep of its functionality, violating the mantra of do one thing and do it well as it goes - these mostly won't care/understand/want to understand how the OS does its thing (the 'Windows users of *Nix', if that tickles your fancy) - and the rest will quietly migrate to a BSD where they can exercise full, fine-grained control and so achieve exceptionally stable and reliable environments.
I'm running Linux without systemd on a laptop. I shut it down when I'm not using it, so reboot several times a day. It helps that the system drive is SSD, but even without that, time to reboot is not an issue; neither is maintenance of the init system.
Likewise, Dr. Syntax: I start my daily driver(s) as required because I see no reason to waste power leaving chunky hardware running idle 24/7. No systemd in sight on any of them and don't recall ever having any serious issues configuring init processes - but we're not the typical target audience I suspect.
The fast-spin-up PaaS people would certainly benefit from quick start times - but as Liam Proven discussed fairly recently, FreeBSD has been shown to be able to start rather briskly, and that doesn't use systemd, so it's clearly not a required subsystem if fast booting is your goal. To get to this impressive boot time, I suspect they optimised the whole init process (which you can and should do if your aim is to offer PaaS because, well, you want to maximise your profit margins or market the hell out of having ultra-responsive load management) - which proves systemd isn't a requirement if you want to streamline, either.
So. The daily power on/off user sees no practical benefit from systemd. The PaaS houses certainly don't need it - and I can't imagine many folks running on-prem server services shut down regularly enough to have an issue with a serialised init process taking a few seconds longer than a new-fangled parallel one that appears only to increase the attack surface and present new and interesting problems!
I know SysV init isn't perfect but this 'cure' seems infinitely worse than the disease!
Honestly, this once again sets off my feelings against systemd adoption, and how this went under the radar for far too long. As much as I like the functionality of the systemd init and core management systems, this part feels like the proverbial one step forward, two steps back as far as how we as users have been treated regarding systemd adoption and the rush there was to start using it.
The era of HDDs is dead and gone; now SSDs are everywhere and boot times are of less importance. We need reliable software, and having a feature/bug, or whatever you want to call it in Ubisoft terms, that can nuke data on this scale is never a good sign. Poorly documented features have no place in software, especially in anything running as PID 1, the foundation of the system. There was no excuse for this. Red Hat and Poettering pushed this mess like a mad hatter onto every distribution they could. The real question is, how long had this been a problem? Day 1? Day 100? Day 10000? How long was this bad software in the system waiting to cause problems?
Another implied HHGTTG reference:
"There’s no point in acting surprised about it. All the planning charts and demolition orders have been on display at your local planning department in Alpha Centauri for 50 of your Earth years, so you’ve had plenty of time to lodge any formal complaint and it’s far too late to start making a fuss about it now. … What do you mean you’ve never been to Alpha Centauri? Oh, for heaven’s sake, mankind, it’s only four light years away, you know. I’m sorry, but if you can’t be bothered to take an interest in local affairs, that’s your own lookout. Energize the demolition beams."
[Author here]
> If you don't have time to "read the docs in full" you should not be allowed to use the command line. Maybe you should get a Mac?
No, I disagree.
It was not a smart thing to do, but I see his point: there's a bit of systemd that manages tmp files, and it has a purge command, and the docs say it purges the files wherever it was told to keep temp files.
That is not horrible user error. That is not PEBCAK.
It was incautious and foolish but it wasn't stupid.
The command was badly implemented, it was badly named and never renamed when they knew they should rename it, and the command should have rejected an empty argument. Which it now does. The command should have warned him. It didn't. And it should be explained in the docs what it does, which now are at least better.
Everyone involved should do better, frankly. Devs, documenters, and users. All have learned from this.
But probably not enough.
I wrote an anti-systemd comment to the following video on YouTube: https://www.youtube.com/watch?v=Kzpm-rGAXos
My initial comment was quite benign. I wrote something like, "If I wanted to use systemd, I would just switch to Windows instead." To his credit, the creator of that channel, Learn Linux TV, left my comment up. A couple of people liked my comment, but a couple of people also left pro-systemd replies which were largely devoid of any facts or arguments.
I replied to the first pro-systemd comment directed at me with the headline and a link to this story. That comment was never allowed up.
A couple of weeks later, another pro-systemd commenter replied to me. Again, I posted the headline and a link to this story. I also offered some opinions which I still think were not very harsh. Yet, once again, the creator of "Learn Linux TV" appears to have not allowed my comment to go up.
So then I edited my original comment which did stay up. I added an Update to my original comment where I wrote that I was being censored repeatedly. I also included the headline and link to this article (this, the third time!).
A few minutes later, and it now appears that the creator of "Learn Linux TV" deleted my whole comment thread! I did a quick search for systemd in the comments for this video; none of them say anything but good things about systemd.
I am shocked that someone who creates videos wanting people to learn Linux would be so heavy into cancelling people whose opinions he disagrees with.