
Things I like about systemd
(nil)
Systemd maintainer Lennart Poettering has committed code for RC1 including a huge number of new features. Releases tend to come around every four months, with the last being Systemd 248 on 30 March. It is an alternative to the Linux init daemon but with much greater scope; its documentation describes it as "a suite of basic …
Alas systemd is a trigger-word:
Fuck off. It's a pile of shite. I yet again had something to moan about: systemd-tmpfiles. Please, please, please let ME manage my machine. I DO NOT WANT /tmp emptied on a reboot. I had changed that in my current OS but an upgrade has uncommented the /tmp line in /usr/lib/tmpfiles.d/tmp.conf
I use /tmp and only want to delete things when I AM READY. Not after a reboot. Cunts.
Fuck off, fuck off, fuck off...
Sorry. :-(
Yea, and my Raspberry Pi that I installed the 'official image' on (Debian based) for my CUPS server, SystemD fucks that up each time it's updated - for some reason it starts the CUPS service before networking so the service daemon fails, and I have to edit the bloody file to make it start after networking - and the 'Restart=on-failure' bloody well fails too, doesn't do anything.
For info ref file: /lib/systemd/system/cups.service:
[Unit]
Description=CUPS Scheduler
Documentation=man:cupsd(8)
After=ssh.service <------ add this line here.
[Service]
ExecStart=/usr/sbin/cupsd -l
Type=simple
Restart=on-failure
[Install]
Also=cups.socket cups.path
WantedBy=printer.target
I don't update now as it just *works* and does the job. I wish I installed Slackware on it like my other two Pi's.
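For reference: anything edited directly under /lib/systemd/system gets put back to the packaged version on the next upgrade; the change survives if it lives in a drop-in under /etc instead. A rough sketch, untested (the After=ssh.service line above is one way; network-online.target is the documented target for "wait until the network is actually up", and on Debian/Raspbian the matching *-wait-online service also needs to be enabled):
sudo systemctl edit cups.service
# that opens /etc/systemd/system/cups.service.d/override.conf - add:
[Unit]
Wants=network-online.target
After=network-online.target
# then:
sudo systemctl daemon-reload
sudo systemctl restart cups.service
Package updates replace /lib/systemd/system/cups.service but leave the drop-in under /etc alone.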
Ah, thanks. Being a Slackware guy I couldn't understand what the hell SystemD was doing, and also WHY the hell it reverts back the service files to virgin with NO backups!
OK, that's it. I will re-image with my old Slackware image file and start again. It does what it's told.
Slack just works, almost everywhere. I invite anyone who hasn't tried it for a few years to take a close look at slackware-current. It's Slackware's version of a rolling release, but is currently the Beta of what will become the long awaited 15.0 ... I've been running it on several machines and it's rock solid. Give it a spin, you'll probably be glad you did.
And yes, it is still without the systemd-cancer.
I still use BSD on most servers, though.
No, they weren't.
Often they had a housekeeping script, triggered on boot and sometimes from cron, which did an ordered cleanup, often based on the type and age of the files. Others that didn't normally had half-competent sysadmins who would write their own scripts.
These, being scripts, were easy to find, disable and/or modify for exactly what you wanted to do, and were not replaced during an update.
Why do I get the feeling that the balance of people here is skewing away from the greybeards to millennials.
What part of "user space is inviolate" do you not understand?
It should never be the machine's decision to delete user files. It's up to the user who put them there to make that decision. YES, a program can and should be able to delete its own temporary files. Absolutely. But it should leave any files that it does not own alone. Likewise system processes. Any blanket file deletion is inherently evil and bound to break stuff in user space eventually. Just say no.
Having a user space application inform the user that the /tmp location is getting full and offering to empty it is a good thing. This way the user is informed of a potential issue before it happens, the user is offered the opportunity to easily fix this (i.e. delete contents of /tmp) and the responsibility for the action is passed onto the user rather than assumed.
However, any application that assumes that data in a /tmp path will always be available is a very poorly written application. If an application requires semi-persistent local storage then it should use a suitable location for this. If an application fails to operate because the contents of the /tmp path are no longer there then this is a fault of the application, not the OS. This isn't to say that an application shouldn't use /tmp for storage, but it should be able to recreate whatever is in there.
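To illustrate that point about not leaning on /tmp: a well-behaved program just recreates its scratch area every run instead of assuming last run's files are still there. A minimal shell sketch (the name "myapp" is made up):
#!/bin/sh
# create a fresh scratch directory for this run and clean it up on exit
scratch="$(mktemp -d /tmp/myapp.XXXXXX)"
trap 'rm -rf "$scratch"' EXIT
# ... do the actual work inside "$scratch" ...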
FYI per the spec, /tmp is specifically for stuff that might not survive a reboot and /var/tmp is for stuff that should survive a reboot.
systemd should keep its hands off both. /tmp should not survive a reboot if, and only if, it's a tmpfs. *No code needs writing* for this "feature". Poettering should just stop writing code and RTFFFM.
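And indeed, if you do want /tmp wiped at every boot, the no-code way is to make it a tmpfs: one line in /etc/fstab (options and size to taste):
tmpfs /tmp tmpfs defaults,nosuid,nodev,size=2G 0 0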
Pottering "invents" users in a simple json file? Linux already supports a user defined in a file, everything was defined in a file until systemd. Pottering insists on reinventing the wheel, only square, with round tyres.
Someone needs to sit this man down and introduce him to Linux.
"FYI per the spec, /tmp is specifically for stuff that might not survive a reboot and /var/tmp is for stuff that should survive a reboot."
It's not a specification, it's a recommendation, and it goes on to further state that it is up to the cognizant systems administrator to make the choice system to system.
"/var/tmp is for stuff that should survive a reboot."
Do not rely on this.
I'd never done an install without reformatting the partition holding /var until the other day when I decided to test this notion. It failed due to a permission error citing a non-existent user name. In Debian-based systems at least, /var includes a lot of stuff relating to the installation. Admittedly the error came with an explanation that it may be a packaging error but the fact remains that the installation process is likely to have been designed assuming an empty directory.
If, by intent or bad luck, your reboot involves a reinstall your vital stuff in /var/tmp or anything else in /var cannot rely on its survival. Given that Apache and MySQL default to placing user data there it's safest to make their directories symlinks to real directories elsewhere such as /srv. Assume /var is for system stuff only.
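If you do go that route it's only a few minutes' work per service; a sketch for MySQL on a Debian-ish box (paths and service names vary by distro, and on Ubuntu the AppArmor profile may also need telling about the new location):
sudo service mysql stop
sudo mv /var/lib/mysql /srv/mysql
sudo ln -s /srv/mysql /var/lib/mysql
sudo service mysql start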
To be fair, surviving a reboot and surviving a reinstall are two completely different things.
Any system should survive even a random reboot with all user data intact (except stuff in RAM being worked on that hasn't been saved, of course). Systems that do not are b0rken, by definition.
Before doing a reinstall, save a copy of all important everything. In fact, it is best if one assumes that the installation routine will assume it has a blank disk to work with, and act accordingly. EVEN ESPECIALLY if the vendor claims otherwise.
On the other hand, we all have all our important data properly backed up, right?
In the gripping hand, beer. Sometimes it's a useful portion of the answer.
... why on earth are you editing things in /usr/lib and expecting them *not* to get changed by upgrades? That's got disaster written all over it, and has on every distro from Slackware on. Mess with /usr at your peril: it belongs to the distro's package manager. /etc is yours, as is /usr/local, but /usr is the distro's.
systemd has a whole scheme for letting you make changes like this and have them persist: copy /usr/lib/tmpfiles.d/tmp.conf to /etc/tmpfiles.d/tmp.conf and edit that: those changes will override distro changes and will not be overridden by upgrades. This is the same for every single configuration file in systemd, and is spreading to other applications because it's such a good idea.
Writing this one off as user error.
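Concretely, for the /tmp-clearing gripe above, the local override looks something like this (a sketch; per tmpfiles.d(5), a file in /etc with the same name completely replaces the vendor one, and '-' in the age column means never clean):
# /etc/tmpfiles.d/tmp.conf - overrides /usr/lib/tmpfiles.d/tmp.conf by name
d /tmp 1777 root root -
d /var/tmp 1777 root root -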
User shouldn't have to do that. It should be done automatically. At the end of an upgrade to a service file like this, the user should be asked (K)eep, (O)verwrite, (B)ackup, etc.
I read a lot learning this shit, but never read what you just said. I guess changing to something that is crap is better nowadays.
Oh, and what on earth are config files doing there?
Did you try RTFM?
https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html#Configuration%20Directories%20and%20Precedence
I'll quote "Packages should install their configuration files in /usr/lib/tmpfiles.d. Files in /etc/tmpfiles.d are reserved for the local administrator, who may use this logic to override the configuration files installed by vendor packages. "
I think the problem here is everyone who runs Linux (especially around here) thinks they are the local administrator and therefore the rules don't apply to them.
And they then get annoyed when things start breaking.
To be honest, a) these rants are pretty fucking hilarious - you guys make the Breitbart crowd look like a bunch of well mannered moderates, and b) as a very very casual user of Linux/macOS these rants pretty much sum up my entire experience of trying to get anything done on those systems - nothing works quite as you expect and it takes fucking ages to figure out why something is broken. Usually to the point that I say 'fuck it, life is too short' and give up. My latest example: Why does my Linux machine only connect to my network at 100Mb/s when all other machines manage to connect at 1Gb/s? I could waste a few days searching online for an answer and editing some obscure config file in the arse-end of the filesystem, but at the end of the day, I really shouldn't fucking have to.
The thing is, if you run a *nix as your personal desktop, like it or not you ARE the administrator. Some admins have more clues than others. Some with fewer clues decide to learn. Others not so much. And yet these admins without clues who don't wish to learn always seem to be the ones bitching about it. Perhaps if you took the time to learn the functions of your general purpose personal computer you wouldn't have the issues that you have? It's a poor craftsman who blames his tools.
I'm a software engineer. My tools are Visual Studio, Visual Studio Code, Android Studio, Xcode, clang, gcc, and various other profilers, editors, and support utilities.
I shouldn't need to know all the intricate details of Windows/Linux/Whatever to write software anymore than I need to know the inner workings of my car to be able to drive it to the shops.
I need to know how to use available APIs to get the job done, and I need to know how to drive my car.
I shouldn't have to waste time learning, for example, how to reconfigure virtual memory settings so Ubuntu doesn't randomly kill applications while I'm using them (been there, done that - never needed to do that on Windows) in the same way that I shouldn't need to adjust the fuel injector settings* before I go shopping.
* If even such a thing is possible. I honestly don't know, nor do I care enough to find out.
Hmm. Protecting the OS in a multi-user, multi-tasking OS is an important thing.
As far as I am aware, the out-of-the-box virtual memory system in most Linux distros is designed to keep the system running by sacrificing greedy applications. This is so that other users of the system will be less affected when the whole system hangs because of resource overcommit.
Using Linux as a personal system may change this core desire, as the system may just be that greedy application as far as you are concerned, so this could be undesirable behaviour. But I've seen many Windows systems brought to their knees for similar problems, and the normal remedy is to reboot the system...
It may be that in your case the application is more mindful of how to manage its own memory in a Windows environment. I assure you that it is possible to write an application to care for its own memory usage in Linux, the tools are all there, and allow it to respond to requests from the OS to trim its memory image when there is contention.
I have problems with exhaustion of memory on Ubuntu, but that is mainly because of the insane and resource hungry behaviour of some applications. Many applications just don't manage their memory usage, and just keep grabbing more and more memory. And with things like Firefox putting tabs in isolated sandboxes for security, its memory footprint is much larger than it strictly needs to be (I feel that using processes with shared read-only text segments would be a better model than sandboxed threads). Once the paging space is exhausted, any OS needs to take serious actions to recover.
For the record, the applications I've had killed on Ubuntu were the command shell (I forget what it's called), SmartGit, and Visual Studio Code. Oh, and clang periodically ran out of memory when trying to compile our code base. Basically the only applications I've ever used for more than a few minutes. The problem was (appears to have been) that the swap file was set to a paltry 2GB which, unlike Windows, doesn't expand automatically when more space is needed. On Windows, I believe the paging file will consume as much drive space as is available if necessary - as long as an upper limit hasn't been set.
It's actually very hard to get Windows to fall over like that these days, and believe me I've tried. Back in the days of Windows Vista and early Windows 7 there was a hard limit of 128GB of memory (actually virtual address space) an application could use. I don't know what the limit is now, but I'm 96.2% certain it's higher in Windows 10, and very few applications even come close to that.
Linux supports dynamic swap files stored on a filesystem like Windows uses. It's just not the default way it is set up in most distros (and Linus actually pulled kernel support for this feature temporarily during a recent release candidate because someone introduced a bug that corrupted the filesystem containing the page file when paging occurred. It's back in now.)
The reason it is not set up that way on most Linux distros is that putting a page file through a filesystem is actually much slower than putting it on a native raw partition, but of course as soon as you use a partition, it is fixed in size. Similarly, the Ubuntu installer will by default only set up a small paging (swap is such an outdated term) partition. You can change it during the install, but I guess you just accepted the defaults.
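For anyone else bitten by a too-small swap partition, adding a swap file after the fact is only a handful of commands; a sketch for ext4 (size to taste):
sudo fallocate -l 8G /swapfile    # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=8192
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab    # make it stick across reboots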
I have to admit that after a little digging around, I found that Linux does not support anything like the SIGDANGER signal that AIX has had for 25+ years (AIX 3.2.3 I think), so the first time an application knows that it's regarded as a transgressor is when it receives the untrappable SIGKILL signal. I'm very surprised, although there have been many suggestions to add it. Long standing advice suggests using cgroups to trap memory conditions, which is more than I would expect a normal user to have to configure.
I don't use Windows that much nowadays, so my experience is a little dated, but I wonder whether your Linux may be similar.
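On the cgroups point, for what it's worth the knobs really are just files; a rough sketch on a cgroup v2 system (run as root, assumes the memory controller is enabled; the group name is arbitrary):
mkdir /sys/fs/cgroup/greedyapps
echo $((8*1024*1024*1024)) > /sys/fs/cgroup/greedyapps/memory.max    # hard cap at 8 GiB; memory.high throttles instead of killing
echo $$ > /sys/fs/cgroup/greedyapps/cgroup.procs                     # move this shell (and anything it launches) into the group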
I resized the page file size just last week after I finally got fed up with the OS disappearing my applications. :)
I don't use this Ubuntu PC too often. It's just a work machine mainly used for testing and bug fixing - the target platform for the product I'm working on is Linux, but we mostly develop in Windows because that's where our technology stack has lived for the last forever years, and, well, why wouldn't I? :)
(Ironically I was half of the team doing the port of our tech to Linux because I stupidly admitted I had Linux development experience on my resume. Which was mostly true - from a developer perspective.)
In Unix you understand the files and their format rather than APIs. You can write APIs if you understand the format, or you can use a simple text editor.
What you don't grok is the same thing that Poettering does not grok...
He spends all day spitting out code and APIs instead of understanding how simple files like /etc/passwd and /etc/resolv.conf are infinitely superior to any code, daemon or API he has ever written.
Bad developers worry about the code; good developers worry about the data structures and their relationships.
Linux people know this. Any tool can be written fast and cheap in bash if the data is in files rather than at the end of an API.
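Case in point, pulling an answer straight out of /etc/passwd is a one-liner with tools that have existed for decades, no daemon or API in sight:
# list ordinary users (UID >= 1000) that have a real login shell
awk -F: '$3 >= 1000 && $7 !~ /(nologin|false)$/ {print $1}' /etc/passwd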
Git's first version was knocked together by Linus in a couple of weeks, and much of the early porcelain was plain shell scripts. Essentially git _is_ the contents of ./.git
Visual Studio coders wait for Microsoft to give them APIs and tools.
Linux users get annoyed if someone insists we use their API and tools.
Umm, no.
When I say APIs, I mean APIs. POSIX, XWindows, OpenGL, Vulkan, et al.
I write real software, not text parsers.
In this particular moment in time I'm working on a 360-degree augmented reality video solution for ship bridge systems and remote operation centres for autonomous shipping: https://www.kongsberg.com/maritime/about-us/news-and-media/our-stories/intelligent-awareness/
Ah, but systemd is best thing since sliced bread. Fixes "problems" with old init. No?
Yup, I never had any issues with old init either. Why yes, of course my x86 servers are FreeBSD or Devuan. Closest thing that resembles systemd I have is SMF on Solaris but at least that works and only does what it's supposed to.
You're typically given that option for changes in /etc. But even though systemd gets on my nerves, I agree with this. Edit /usr/lib at your peril. And even if you disagree, it's an issue with the package manager (apt, rpm etc) not systemd.
I do share your pain with /tmp being cleared, but that's configuration files for you. About a year ago the default options for vim changed to make it utterly unusable. So I had to install a .vimrc on 30 odd machines to get anything done. Change can be annoying.
No no. /tmp is for users, that's why they invented the sticky bit. IMHO /var/tmp should be where the system does its stuff. /tmp was designed for users. Only later did someone come along and say reboots mean clearouts. That's wrong.
I store stuff that is temporary in /tmp. I don't know how long I'll need it. Maybe I'll not want it again, or maybe the PDF is very interesting, and I want to read it again over the next few months. Temporary is my decision, NOT the OS's.
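(For the record, the sticky bit is what makes a shared, world-writable /tmp workable: anyone can create files there, but only the owner can delete them - the trailing 't' in drwxrwxrwt. Setting it is just:)
# mode 1777 = world-writable plus the sticky bit, the normal setting for /tmp
chmod 1777 /tmp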
https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s18.html
> Filesystem Hierarchy Standard
> LSB Workgroup, The Linux Foundation
> Version 3.0
> Copyright © 2015 The Linux Foundation
---------------------------------------------------------------
> 3.18. /tmp : Temporary files
> 3.18.1. Purpose
> The /tmp directory must be made available for programs that require temporary files.
> Programs must not assume that any files or directories in /tmp are preserved between invocations of the program.
> Rationale
> IEEE standard POSIX.1-2008 lists requirements similar to the above section.
> Although data stored in /tmp may be deleted in a site-specific manner, it is recommended that files and directories located in /tmp be deleted whenever the system is booted.
> FHS added this recommendation on the basis of historical precedent and common practice, but did not make it a requirement because system administration is not within the scope of this standard.
---------------------------------------------------------------
> 5.15. /var/tmp : Temporary files preserved between system reboots
> 5.15.1. Purpose
> The /var/tmp directory is made available for programs that require temporary files or directories that are preserved between system reboots. Therefore, data stored in /var/tmp is more persistent than data in /tmp.
> Files and directories located in /var/tmp must not be deleted when the system is booted. Although data stored in /var/tmp is typically deleted in a site-specific manner, it is recommended that deletions occur at a less frequent interval than /tmp.
I think that "did not make it a requirement because system administration is not within the scope of this standard" covers the OP's preferred use of /tmp more than adequately.
Note that 5.15 only discusses what programs (not humans) can do with junk in /var/tmp and specifically states that it can be handled in a site-specific manner.
And quite frankly, it's my machine. I'll decide what gets deleted on reboot, thank you very much. If I want it automated, I'll fucking automate it. Anything that potentially deletes wanted files is evil, by definition.
"What a pity, my laptop doesn't have systemd as init (*), I won't be able to profit from these great new enhancements"
To this day, I still do not know why the Debian Project voluntarily chose to adopt systemd all those years ago as the init system and in that one move, all the downstream distributions were also unfortunately affected/infested.
If I remember right, at that time it was sold as a modern INIT system, in competition with runit and OpenRC. Only later did SystemD become the monstrosity that it is today. Could they have known at that time that it would evolve as it did?
But today they can't pretend any longer not to know, so I wonder now whether Debian will reverse course or not.
It was already becoming the monstrosity it is today when the decision was made - against the wishes of the community, I should add. The controversy generated by the proposal should have been a reason to delay, but it was instead used as a justification to forge ahead and damn the opposition.
I was about to say I'm very slowly coming to accept systemd (note that's accept, not like) but the main thing that really infuriates me still is the need to use a completely separate util/command just to see what systemd is logging (or spewing) so I can try and actually figure out why a service won't start. As just telling me the error/output when the actual command is run is obviously far too useful/intuitive.
If ever there was a case making something more user-unfriendly or making it so much more complicated than it needs to be... this is a prime example.
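For anyone hitting the same wall, the separate command in question is journalctl, and the usual incantation when a unit refuses to start is along the lines of:
systemctl status cups.service              # current state plus the last few log lines
journalctl -u cups.service -b --no-pager   # everything the unit logged since boot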
Can someone explain to this FreeBSD fangirl what any of these new features have to do with initialising services?
In FreeBSD, we have different tools for doing all these things. Many of them are in the ports collection which means you don't install them if you don't want that particular feature in your system.
Well, that's what System V does - you have your service start files in /etc/rc.d/ and can change the order, add, remove, turn off, turn on, etc. whatever services you want.
SystemD goes against everything in the UNIX philosophy of having 'one binary to do one job, and do it well', and piping together the tools to get the proper and required result. SystemD attempts to do everything. It almost seems to be designed for idiots that are not interested in their computer (Microsoft stance).
What? sysv init scripts, both as originally implemented on sysv and as now present on (non-systemd) Linux distros are an almost wholly undocumented nightmare of barely-commented shell scripts rife with poor interactions and no error handling whatsoever. Even BSD single-rc-file was better: at least you could have conditionals that crossed multiple services easily.
sysv init scripts are a total mess and clearly cobbled together from whatever pieces lay to hand at the time. No design was involved, and they're as far removed from the clean design of the Unix philosophy as the Windows kernel was.
I might not like systemd a great deal, but that doesn't mean I'm willing to engage in obvious lies to attack it. There are good reasons to dislike systemd. This is not one.
Odd, my init-based Linuxes have no problems with this. As for SysV rc scripts, never had issues in Solaris, AIX or HPUX. Linux, well yes, some are obscure in operation. However I have noticed config files being dropped all over the place in the last two decades, instead of just /etc.
"And they were totally pissed off by being able to develop and de-bug a start-up script in the shelll - including stepping through it if necessary - and then just drop it into /etc/init.d."
Because shell scripting is so very, very difficult. So we'll write an entirely new interface, and do away with all that nasty text stuff (except where we don't). In fact, text files are so very, very difficult that we'll even make the logs binary and invent more new tools to deal with them! Never mind that it's not compatible with anything else in the entire system, it is obviously much easier this way. So easy, in fact, that we'll re-write as many other things that we can think of, no matter how peripherally involved with an init, just to make the entire system as big as possible. What could possibly go wrong?
One wonders what the authors of the systemd-cancer are smoking.
It is notable that one of the biggest users of Linux - Android - does NOT use systemd. Google has rightly decided that systemd is unsuitable for prime time use in phones and tablets.
Small devices (e.g. media players and IoT devices) do not normally use systemd due to it needing far more resources (ROM, RAM and CPU time) than a simple init-based system.
My own opinion - systemd is part 2 of the M$ "Embrace, Extend, Extinguish" method. I wonder how much M$ is paying the systemd team.
Icon for what should happen to systemd and its developers ========>
"I wonder how much M$ is paying the systemd team."
Probably nothing.
Personally, I wonder when IBM is going to notice that they are paying for the clusterfuck and put the kibosh on it. The systemd-cancer needs to be cut out of the Linux ecosystem before it manages to suck the life out of it.
What amazes me is how many other tool/utils maintainers just go "fsck everybody who wants to do things a different way, I'll just assume SystemD is installed". For absolutely no reason. Devuan has to include a libsystemd0 (or something like that) just to tell all those toys that there is no systemd; their ability to default to anything else without it being there is gone. Madness!
I'm just waiting for the day Poettering slips in a Win32 image as an easter egg.
I'll just assume SystemD is installed
Assuming that any resource is installed (apart from the essentials, of which systemd is definitely not) is lazy and poor programming.
That is hardly new; I have seen embedded systems where card inits (in a compact PCI rack) did not return error codes, so I updated the code to add a status word: as it went through each part of the initialisation it cleared the status bit associated with that part. That was over 20 years ago with the target running LynxOS. My host was running Solaris.
Prior to that being implemented no-one could tell just what had not come up properly as subsequent commands to the system simply returned an error.
If the status word for the entire initialisation (I used a 32 bit word as it was easily sufficient for the specific needs I had) was non-zero it was trivial to find out where the init broke.
That code took me all of a day or so to write including learning the 'not for the faint of heart' ioctl(). Good investment in time from my perspective as I was one of the poor sods who had to figure out what was not working.
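The idea is trivial to sketch in any language; a toy bash rendering of the same status-word trick (stage names and stub functions are made up):
#!/bin/bash
# one bit per init stage; a bit is cleared only when that stage succeeds
PENDING_CLOCKS=1 PENDING_BACKPLANE=2 PENDING_LINKS=4
status=$((PENDING_CLOCKS | PENDING_BACKPLANE | PENDING_LINKS))
init_clocks()    { true; }    # stand-ins for the real per-card init steps
init_backplane() { true; }
init_links()     { false; }   # pretend this one fails
init_clocks    && status=$((status & ~PENDING_CLOCKS))
init_backplane && status=$((status & ~PENDING_BACKPLANE))
init_links     && status=$((status & ~PENDING_LINKS))
[ "$status" -eq 0 ] || printf 'init incomplete, status bits: 0x%x\n' "$status"
With a 32-bit word that scales to 32 stages, and a non-zero value tells you at a glance which one fell over.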
Decent software should handle errors gracefully and have clear error / status reporting.
It should also not, tentacle like, try and be more than the minimum necessary.
People have been parsing syslog forever. While I'm not dismissing your point about non-English speaking users, like it or not (and I'm not saying it's right) at a system admin level, the vast majority of stuff is in English, so singling out syslog doesn't make sense. And reading a bunch of poorly documented or undocumented random numbers is no replacement.
And none of what you say negates the twattiness of having a binary logging format with a non-standard interface to it.
Plain text works. You don’t need any special tools to read it other than more or cat, it’s simple, it’s robust (any corruption tends to be localised and doesn’t bugger up the rest of it - what was idiot-head’s response to shiteD log corruption again? “live with it” or something like that!) and it’s a standard that everyone knows. ShitedomD just shits over all this.
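Which is the whole case for text in a nutshell: every tool you already have works on it. Chasing the CUPS problem from further up the thread on a classic syslog box is just:
grep -i cups /var/log/syslog | tail -20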
Being curious and an old-time Linux user I went over and checked out the documentation.
A) I read through one of the pages and told myself: "This is sooooo wonderful... Next time I'm drunk (unlikely to happen anytime soon) I'll simply go back to the site and put that page back on... it's going to induce vomiting and I'll be able to drink some more."
B) As I read further I realised how insidious it is and how much must pass through systemd... limitations, calls, recommendations.
It is to vomit ! ..
It wins first place in the "All time Mongolian Cluster F***" category.
I understand developers and people that steer clear of this. Systemd wants total control of everything that goes on in the computer. That ain't a boot loader.
That's a project gone insane. OH ! BY JOVE I GOT IT .. Donald Trump must be heading the project from the shadows.
Because before systemd nothing and nobody could handle passwords, no one ever managed to log into any machine. Systemd fixed this problem once and for all.
In other news Poettering once had a problem logging into his laptop and thought "We could just gently stroke this subsystem with a big fucking chainsaw. Hmmm... MOAR CODE. BRILLIANT!"