I've read the Canonical & Microsoft blog posts about the change. Other than saying that SystemD needs to be PID 1 and previously WSL was PID 1, neither say really what changed.
A related question is: Why did WSL need to be PID 1?
Linux distros running on Windows in a WSL2 virtual machine now can use the systemd init system. This week Microsoft and Canonical jointly announced the news that the latest build of Windows Subsystem for Linux 2 (version 0.67.6 and higher) has been modified to support systemd. Canonical's blog post has some technical detail, …
Nonono the worst thing that SystemD did to Linux, was to make it more like Windows. If only SystemD would move over to Windows completely it would be like the perfect pairing and I, for one, would be very happy for them. That is they would be happily living OVER THERE and I would be happily living OVER HERE.
As someone who started out as a Windows admin, I have to admit about half of systemd makes sense to me. Half of it seems to be wilfully over-complicated for no obvious benefit though.
(When Windows is over-complicated there is usually a reason. That reason is usually 'a massive customer wanted it and now everyone has to live with it', but hey, at least it's a reason.)
The big question is whether Pöttering is still responsible for maintaining systemd. If so, expect the Linux variety to acquire a dependency on a binary tree-structured key-value store for "registering" configuration information in. I imagine that it will have separate branches -- let's call them "hives" -- for systemwide and per-user data, and make heavy use of opaque GUIDs as key identifiers.
-A.
I was wondering if it might be an idea to have some sysctl configurables to designate which PIDs to use for various stuff like that, until I actually stopped to think about it for a moment and realised it would be a stupid idea. It seems that some people don't reach the part after the comma before they start coding.
Interesting, thanks. FBSD (which is where I usually write stuff) doesn't have it but Linux does; the younger me would've dived in and tried to find a job for it but I think I'll stick to reading about it now. At least I will once my current migraine subsides and makes room for the new one that I'll probably get!
No, not Fortran, and yes, a good question.
PID 0 is owned by the kernel, specifically the swapper/scheduler process that historically kept an eye on memory.
Traditionally, the init was the next process called during boot, so it defaulted to PID 1 .... later, as the kernel grew more complex and had to call a few other processes that required PIDs, technically the init might have received PID 2, 3 or 4 (or whatever), and indeed I worked on early systems that did this. Thankfully, in order to preserve sanity within the system, wise heads decided to reserve PID 1 for the init.
WSL is a little odd in that you normally just start it by typing "bash". Within seconds of first invocation you have a bash prompt and away you go. It doesn't "boot up" in the normal sense: WSL just mounts the filesystem and presents a kernel and a PID 1 to bash, so it starts fast. It runs even faster if you type bash a second time from another terminal, because it's already started the instance.
The problem comes with actual services / daemons in the background and then you discover they're not running because as I said it went straight to bash. There is no systemd because it wasn't used to kick off bash and none of the init style scripts ran either. You can still do something like "/etc/init.d/xrdp start" manually if you want but it's not done until you tell it to do so.
It seems like Microsoft have just tinkered a bit with the startup. There is a wsl.conf that tweaks the behaviour of the WSL instance and now you can tell it to kick off systemd first. Aside from making services work it also enables snaps too which were dependent on systemd.
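For reference, the wsl.conf change really is just a two-line stanza (this is the syntax Microsoft documents; the file lives at /etc/wsl.conf inside the distro, and you need `wsl --shutdown` from the Windows side before it takes effect):

```ini
# /etc/wsl.conf -- enable the systemd init for this WSL2 distro
# (requires WSL version 0.67.6 or higher, per the announcement)
[boot]
systemd=true
```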
AD integration, SAML connections, GPO functionality, COTS desktop apps available only for Windows, Office and Azure apps (yes I prefer LibreOffice, but we're talking corporate here, and there's lots of Azure connectivity apps not yet ready for Linux), Hyper-V images (allowing migration to the Azure cloud), user/automated patching ...
Easier to get support staff for Windows, easier access to training, common platform, most likely the same desktop they use on their home computer ...
Don't get me wrong, I'm a UNIX/Linux guy at heart, but there are good solid reasons for running a Windows server and desktop system
"Keep telling yourself that, and it'll remain true. For you."
^^^ Ha! -> +1 / Beat me to it, I was about to post the exact same phrase.
But yes and here's the deal ®:
Using Linux? Good, it's a big step in the right direction.
But stay away from all that systemd crap Poettering came up with to infect the Linux environment and install a systemd-free distribution.
Need to run a discontinued application you seldom use and will only run on Windows?
Set up a sandboxed VM with your MS OS of choice and you're where you want to be.
No need to have a Windows install to then use a Linux system.
Like I have said before:
---
Systemd is a virus, a cancer or whatever you want to call it. It is noxious stuff.
It works just like the registry does in MS operating systems.
It's a developer sanctioned virus running inside the OS, constantly changing and going deeper and deeper into the host with every iteration and as a result, progressively putting an end to the possibility of knowing/controlling what is going on inside your box as it becomes more and more obscure.
Systemd is nothing but a putsch to eventually generate and then force a convergence of Windows with or into Linux, which is obviously not good for Linux and if unchecked, will be Linux's undoing.
There's nothing new going on here: it's nothing but the well known MSBrace at work.
Now go and tell me that Microsoft has absolutely nothing to do with how systemd is crawling inside/infecting the Linux ecosystem.
---
Have a good week-end.
O.
-> Keep telling yourself that, and it'll remain true. For you.
The thing is, it is true for a lot of people. Let me tell you something, lend me your ears. While I am happy to see more people using Linux, a lot of those people are low-end users. That is the inevitable result of making Linux on the desktop more "automatic". They think they know what they are doing because they have been using Linux Mint for 5 years, using LibreOffice to write a few letters, using Firefox. They think they are UNIX people.
The problem comes when those people are dropped in the deep end with Linux servers and they don't know what they are doing. They are almost averse to the command line. They have never seen an "OK" prompt. They are at much the same level as average Windows users.
I have seen the horrors of apt's utterly confusing errors and conflicts. I know how to deal with that. A lot of Linux users do not know. Instead they have been led up (or is it down) the garden path that Linux just works. Yeah, until it doesn't. That is when the Linux on the desktop people get separated from the old UNIX hands.
With Linux the lessons learned tend to apply for life, whereas with Windows they only apply until the next patch?
Hmmm, as a BSD user (where 'ipconfig' still does what it's done for years) I often find Linux a confusing place when dropped into the command line, requiring a good Googling to get where I want.
I'm not sure it's any more consistent than Windows these days, not if you're an occasional user who goes for a headline distro.
More accurately it should have been *nix in general, on account of having dwelled in many variants.
That said, Linux variants do indeed differ a lot. It appears we may either end up with Redhat or SuSE now on account of the new gear that's being bought, so I will probably run the same (or the more open variants) at home, although I found OpenSuSE a tad incompatible with the rest of the world - that has probably more to do with my lack of expertise than the distro :)
I must admit to still needing to look up the odd bit of the new syntax, but in general it seems consistent and over the course of using it to set up a bridge relatively pain free.
I rather think it's an improvement, especially given the json output which makes pulling apart the output with jq a little easier than putting sed to work.
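To illustrate the point (the JSON below is a hand-built stand-in for what `ip -j` emits; the real output carries many more fields per object):

```shell
# 'ip -j addr show' emits JSON, so jq does the field extraction that
# used to need fragile sed/awk over the classic text output. The
# sample data here is a mock-up for demonstration.
json='[{"ifname":"lo"},{"ifname":"eth0"}]'
printf '%s' "$json" | jq -r '.[].ifname'
# On a live box you'd run:  ip -j addr show | jq -r '.[].ifname'
```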
" as a BSD user (where 'ipconfig' still does what it's done for years)"
And that would be absolutely nothing.
ipconfig is a consumer-grade OS command.
Perhaps you meant ifconfig, which while initially developed on 4.1BSD, is fairly ubiquitous in the *nix world, and still works quite nicely on Linux.
"I often find Linux a confusing place when dropped into the command line"
I can see that ... but I don't think it is the fault of Linux.
@VoiceofTruth, so rather than talking in generalist terms, give an example of a deep, cryptic error you've encountered with the apt package manager and the commands you used to correct it, and an example of the size 12 boots going in, using the wrong command and causing chaos.
To me, this is the usual "Linux is cryptic and hard, Windows is straightforward" argument, and it really doesn't hold much water. Learning Windows PowerShell is not exactly straightforward or without its many quirks.
For example, if you have a Windows system that doesn't boot because of an updated, corrupt core driver that loads even in safe mode (so safe mode doesn't boot), it's nearly always quicker to pull a previous full image and just write the current installation off.
The only way I can use Windows is: make a change, test that change, Backup, Backup, Backup.
I like FreeBSD because it is a far tidier OS than Linux distros today or indeed in the last 20 or so years. Linux has gone down a route of making certain things unnecessarily complicated. systemd is one such example. It's regrettable.
My complaint about more people becoming Linux users is not that they use Linux (I've said that is a good thing), but that a lot of Linux distros do things automatically for these users. The result is low end Linux users (akin to average Windows users), not knowledgeable Linux users. The problem comes about when some (quite a few?) low end Linux users think they are good UNIX admins when they are not. They have not been exposed to the fun and games that some of us older hands have. They know how to create an icon on an XFCE desktop. Great. Now get stuck into a nasty fsck problem without losing a load of files.
I wrote that I have seen this, and it's true - lower end Linux users steaming in like a bunch of steamers (remember them?), but they think they are good UNIX admins. There are good Linux users out there. There are also quite a few Linux users who have several years' Linux experience but not much actual usable experience.
"Try running Linux without the systemd cancer and see how far you get."
Oh about as far as Slackware or PCLinuxOS. They don't have systemd anywhere near them and so far as I know never will.
See, that's the joy of Linux. Don't like what you are offered? Then there is always somewhere else you can go.
"see how far you get"
You get to my favourite, Devuan, and Jake's favourite, Slackware, for a start, plus one or two others. They all work perfectly well. (From what I read it may well be that they have a problem running current versions of Gnome but that doesn't disturb my idea of "perfectly well" and there are quite a few others of the same view.)
Gentoo Linux can do "no systemd", but so much stuff uses it that they've written some shims that just do the things required (logind being one that too many things use). The full systemd "do everything for everyone" thing isn't required.
Most of the "systemd-less" variants use something similar so that more modern stuff can be used despite not having the whole of systemd's "take over the system" development paradigm included.
Whether this is a good, or bad, thing is a worse debating topic than the emacs vs. vi vs. nano vs. kate wars.
"The number of Linux distros *with* systemd tells me otherwise."
What, BOTH of them? That many? WOW!
Only two major distros adopted the systemd-cancer. That would be RedHat and Debian. RedHat did it for pure marketing reasons, essentially they were trying to be more like Windows. In Debian's case, it was an accident of history, in essence fall-out from a large internal power struggle. Today, the PTB at Debian know damned well that they made a mistake ... but to admit it would mean a loss of face, so it isn't going to happen any time soon.
The rest of the distros to implement it, being mostly clones of those two, blindly followed due to ignorance and/or apathy, with a pinch of sheer laziness on the part of the devs. They certainly didn't spend any time thinking about the ramifications, beyond "I use that software repository, so I must comply".
SystemD is a third-party "app" and has absolutely nothing to do with Linux.
Neither does glibc, or bash, or X11, or Wayland, or gtk or qt or GNOME... Everything but the kernel is a 3rd party app that has nothing specifically to do with Linux. But without them, the system would probably be called Android, not Linux.
"I like FreeBSD because it is a far tidier OS than Linux distros today or indeed in the last 20 or so years."
I do not think you have the experience to be making such claims.
"Linux has gone down a route of making certain things unnecessarily complicated. systemd is one such example."
Repeat after me: "LINUX IS NOT SYSTEMD, SYSTEMD IS NOT LINUX!".
The systemd-cancer is not necessary for a fully functional Linux-based system. It never has been, and it never will be. All clueful admins know this.
I'm not saying I agree with it (I tend to prefer the extra stuff, within reason) but I certainly agree with that point; and I remain rather bemused by the architects of the ongoing attempts to rationalise the directory structure deciding to move stuff to /usr instead of away from it. *shrug* etc.
The BSDs are effectively derived from Research Unix V7.
V8 never saw release outside Bell Labs, AFAIK, and v9 was not finished... but v9 inspired Plan 9, and the numbering may not be a coincidence.
I would *love* to see a bunch of determined hackers seize Plan 9 and update it and try to make it into something much more Linux-like. It is considerably more lightweight and clean than any BSD.
But the successor to Plan 9 was Inferno, and Inferno has some excellent ideas in it as well.
There is room for a lot of modernisation of both Inferno and Plan 9, and I also wonder if there might be some way to effectively merge them into one.
V9 and V10 were definitely a thing, I have the source code lying around somewhere (it's free to download for the curious) though admittedly "finished" is a subjective term. IIRC, V8 also fed back into BSD but I'd need to check, and I'm not sure all the various lineage charts agree with each other anyway.
"V9 and V10 were definitely a thing, I have the source code lying around somewhere (it's free to download for the curious)"
It's in the TUHS archive. Try https://minnie.tuhs.org/cgi-bin/utree.pl
"though admittedly "finished" is a subjective term."
I'll go out on a limb and state that no version of *nix has ever been "finished", and none ever will be.
"IIRC, V8 also fed back into BSD but I'd need to check"
The cold war between AT&T's lawyers and the rest of the UN*X world was in full swing during the V8 era, and neither side even dared to talk with the other down the pub. It was not much fun; contrary to popular belief, us old *nix hackers tend to be a quite gregarious lot when we get together ... However, things had thawed a bit by the time V9 and V10 came around.
V9 was notable by the inclusion of X11 for the first time. V10 was just V9 with some patches to V9's by then 3 year old code. This was all about the time 4.3BSD was in the beginning stages. The two worlds traded concepts, but no official source code, although unofficially code went both ways ... and was promptly "sanitized" as needed, so the lawyers wouldn't get uppity.
"and I'm not sure all the various lineage charts agree with each other anyway."
My memory is getting hazy as to the details, but the above is what I remember from being a grunt in the trenches during that time.
"I would *love* to see a bunch of determined hackers seize Plan 9 and update it and try to make it into something much more Linux-like. It is considerably more lightweight and clean than any BSD."
If Plan 9 one day got turned into something much more Linux-like - the horror, the horror - by definition it couldn't possibly be more lightweight and clean than any BSD. One word proof: systemd.
IIRC Plan 9 was the Bell Labs response to the bloat and bad design choices found in both BSD and System V 30-40 years ago. [I say this as a devoted fan of BSD, which has remained my OS of choice since first using it ~1980.] In a way, Plan 9 was a return to the core principles found in V6 and V7 UNIX: a small, tidy OS that was fast and ran on modest hardware.
"tbf some people thought V7 is when the rot set in and preferred the much more nimble, lightweight V6."
To be equally fair, any time a rev is rolled on ANY software, somebody will bitch about it. Back in the day, only a few people heard the bitching. Today, the Internet acts as an amplifier and everybody hears about it ... even when it's just a few folks with complaints.
"I don't quite recall where BSD fits into the Unix family tree"
BSD started life at University of California Berkeley, as patches and additional tools for Bell Labs Research UNIX V6. Some of these changes found their way back to Bell Labs and were rolled into V7, and some of V7 went into later BSD. BSD and UNIX swapped code back and forth for several generations, until AT&T's lawyers noticed that UNIX was worth some money, at which point the BSD source was eventually sanitized, with all AT&T code rewritten from scratch by around the 4.3BSDs. This led to the versions known as 4.3BSDTahoe... and Net/1. Then 4.3BSDReno and NET/2. NET/2 led to 386BSD and then on to all the BSDs we have today, which at least to some degree continue the code swapping tradition.
By way of reference, the UNIX Wars occurred roughly between "lawyers" and "386BSD" in the above paragraph, with a small (probably ongoing) footnote from an upstart company with an assumed (purchased) name, known as SCO, happening later.
Also note that several other large companies (most notably Sun Microsystems, IBM and NeXT) and many Universities world-wide contributed to the early BSD work.
In a nutshell, you're not imagining things. It's a convoluted history.
"To be equally fair, any time a rev is rolled on ANY software, somebody will bitch about it. Back in the day, only a few people heard the bitching. Today, the Internet acts as an amplifier and everybody hears about it ... even when it's just a few folks with complaints."
Fair point. Seems weird that I got unanimously downvoted for that comment (unless people didn't read it properly and think I'm being all fangirly about V6 as "the One True Unix" or something). I mean it feels like the infamous Morrowind vs. Oblivion debate has been going on since forever but the Unix grudge-matches are much older; as is the "Unix is dead" thing, which I first heard upon my original encounter with Unix (well, Ultrix, for the pedants: kinda sorta BSD on a Vax) at college at the tail end of 1986 and it almost certainly wasn't new then...
"While I am happy to see more people using Linux, a lot of those people are low-end users. That is the inevitable result of making Linux on the desktop more "automatic"."
Those people not only don't care which init their system uses, they don't even know an init exists at all. This isn't a problem for them, because somebody else takes care of their system. As long as they have a network connection, a browser and an office suite (and whatever other "productivity" software they (think they) need), all easily clickable from their desktop, they'll happily point & drool. These people can use BSD, OS/2, Plan 9, Linux, OSX or whatever the Redmond/Cupertino/London triumvirate is pushing these days. It simply doesn't matter to them ... IF their system is set up by a competent tech. This is probably over 98% of the userbase.
The whole systemd argument is between mostly clueless fanbois and actual techies, with the fanbois mindlessly agreeing with the decision of their distro of choice, and the techies saying "Now hang on a sec, what exactly does this give me that I don't already have ... and what is it going to cost me in the long run if I switch?" ... From what I've seen, after giving it due thought almost all of the techies want nothing to do with systemd. The fanbois don't think, their thinking is done for them by their distro maintainer. For small values of "thinking". From here it looks like most down-stream distros are simply following the path of least resistance.
> there are good solid reasons for running a Windows server and desktop system
I'm sitting in front of a windows pile of shite right now, stuff that is completely unfamiliar to me, and unable to change the background colour on an excel spreadsheet because it's got the braindeadest UI known to humanity.
The only "good reason" I can think of is to improve my swearing in a variety of languages.
> What's not to like is people who use WSL do so for a reason - having Linux support in Windows is fantastically useful for developers.
>
> Snorting about how they should be running BSD or some random Linux dist is completely missing the point.
Maybe, but we're not there yet IME.
I've used Windows and Linux on the desktop corporately, with Cygwin, VMs, and WSL in the former case(s). Nothing matches Linux on the desktop when developing for Linux, with Windows in a VM if you are locked in to something that needs it. Modern tools for both are making it easier to manage things with Windows underneath but there are still edge cases, unfortunately.
For example, a) working in the shells for git causes problems with the DOS newlines - I've found myself employing "useless use of cat" to work around getting code into the system; b) when using a Linux VM on top of Windows it was necessary to be wary of whether files transferred via shared directories had gained an executable bit they didn't need or lost one they did; c) the ability to mix and match a toolchain built for Ubuntu systems with Windows source code control tools under WSL is nice until you find some executable the Makefile just built in your freshly cloned repository can't be stripped because of "permission denied" in the very next line of the same build rule.
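On point a), one common mitigation (hedged: this assumes the trouble really is CRLF conversion, and opinions differ on the right autocrlf setting) is to stop git writing DOS newlines into the working tree in the first place:

```shell
# Tell git to normalise to LF in the repo but never write CRLF into
# the working tree -- often the least-surprising setting for WSL
# checkouts shared with Windows tooling.
git config --global core.autocrlf input
# A per-repo .gitattributes line ("* text=auto eol=lf") is the more
# portable fix, since it travels with the clone.
```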
Having said that WSLg looks interesting, as do tools with X servers built in - but it seems I have to cross my fingers the OS my IT department mandated has the right feature set to be able to install and use that sort of thing (...I'm expecting to make do with a VNC server in lieu...)
... what the issue is with systemd ??
I mean everywhere I've worked has used Centos, or mainly Ubuntu.
I can't say i've noticed any issues from using the systemd stuff?
Is this just a koolaid factor?
I mean, you could say that motorbikes are unnecessary, because a Penny Farthing does the job.
So, the systemd haters who motorbike to work - go to the office on P.F. - put your money where your mouth is....
Linux is full of functionally duplicate apps - it's just the way it is....
I actually had the same question, and this is the most insightful post I have come across.
TL;DR: it depends (sorry) :).
Other than it having been adopted by some distros a little too early in its development lifecycle, with issues that needed sorting, not much. It's fast, modular, and easy to write service files for. Personally, I wouldn't go back to openrc (which I used to use and prefer), much less sysvinit, or any other similar ways of handling processes. The other tools that come with it—if enabled—are also generally useful in my experience and I have found myself using more and more of them over the years. Fairly good documentation as well.
As much as it may upset the little circlejerk that usually goes on in the comments of these articles, there's good reason that it's become the de facto standard init. You'll struggle to find non-tinfoil or non-religious practical arguments against it in its current state.
I don't really care about some idealized UNIX philosophy that hasn't ever really applied to Linux nor do I care about the often very personal-level criticisms of systemd's creators. It allows me to manage my home boxes and remote servers without really getting in my way and gives me powerful tools to modify things if needed. I can't say that I love it—it's just software, after all—but it very much is useful and that's the only bar it has to clear.
I would appreciate an example. I'll admit I'm not a sysadmin in a Fortune 500 or massive datacenter or whatever but I've been using systemd for over 10 years now for all sorts of deployments without issues. Never once have I thought that a shell script would be a better option.
The basic problem is that the stuff it does over and above starting a service are things which should be delegated to other system layers which can be swapped out or debugged more easily locally.
It's a layering violation, and that makes people uneasy. I'm not commenting on all the reasons for it; there are some which allow some useful integrations.
It does a job, and given its sprawling service offering, applications are now unhealthily dependent on features of the init system - which is painful for CI, since your system behaviour is now out of your control.
A shell script, by contrast, runs in a VM or in a container, is interactively debuggable, and can have tracing turned on with a single line. At that point the bug is in your script rather than the C code forming the shell; systemd puts it back in C again, with a side order of undebuggable unit files.
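That "single line" of tracing is just `set -x` (standard POSIX sh, nothing exotic; the daemon name below is invented):

```shell
#!/bin/sh
# set -x makes the shell echo each command (prefixed with PS4, '+ ' by
# default) to stderr before running it -- the one-line tracing story
# that script-based init gives you for free.
set -x
echo "starting hypothetical-daemon"
```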
So people have units which have horrible scripting rather than just calling an external script. It works, but it's a different mindset.
I'm rebuilding a system which has been making money for 7 years, as it's being moved to new hardware.
It has reboots on the order of months, upgrades are almost non-existent, it's stable and locked down to buggery.
I don't care that it takes longer to boot with an old init. As it happens the rebuild is using systemd, and it made things a little more concise, which seems a pointless victory given how much code it's managing. The shell script it replaced was boilerplate which was well debugged across multiple platforms, and it was replaced with a docker invocation, which contains the same script internally.
Using systemd as a supervisor for containers is not terrible, but in general, there are a lot of architectural reasons, hidden behind a layer of dislike for the foolishness of trying to remove shell scripts from Linux.
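As a sketch of that supervisor pattern (service and image names invented; whether systemd or docker itself should do the restarting is a matter of taste):

```ini
# /etc/systemd/system/mything.service -- systemd as a thin supervisor
# around a container; all the real logic stays in the image's scripts.
[Unit]
Description=Hypothetical containerised service
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker run --rm --name mything example/mything:latest
ExecStop=/usr/bin/docker stop mything
Restart=on-failure

[Install]
WantedBy=multi-user.target
```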
I realize you're not the poster I was asking for an example from, but thank you all the same for taking the time to comment.
You're right that for an old, stable system which will virtually never see upgrades or changes it doesn't make a practical difference. It may as well be an embedded system—so long as it boots, works, and is bug-free the internals don't really matter.
For whatever it's worth, I do see the value in shell scripts and have obviously written quite a few in my time. However, the poster who I was replying to specifically mentioned service files vs scripts, and there are very large shortcomings with the latter approach in the context of getting the system ready (at least how it's been handled historically at times). Service files are just outright better at handling dependencies and helping track points of failure in the init. What a target is wanted by, under which conditions it should run or not, user permissions, and all those little subtleties are handled in an easy-to-understand manner that's easily changed or modified according to deployment needs.
Fetching information about the state of a unit and what it's doing is also simple, and there's plenty of in-built tooling to analyze the information in various ways. But, I realize that we're talking about different things here. I don't have a problem with the scope of systemd being beyond basic service management and think that the upsides of more tooling and integration with other aspects of the system far outweigh the downsides. But that's a wholly different discussion than systemd being more difficult/unmanageable than shell scripts for complex setups.
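For concreteness, the little subtleties I mean mostly map to one-line directives (unit name, paths and user invented for illustration; all the directives themselves are standard):

```ini
# hypothetical-daemon.service -- the dependency/condition/permission
# knobs that shell-script inits had to hand-roll.
[Unit]
Description=Hypothetical daemon
ConditionPathExists=/etc/hypothetical-daemon.conf

[Service]
User=daemonuser
ExecStart=/usr/local/bin/hypothetical-daemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```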
I've managed to debug things with systemd - most of the time I'm fortunate that I can nuke and pave until I have a reliable deployment recipe automated.
I don't consider the debug/logging experience has greatly improved, what has happened in my experience is all the complex stuff got moved into containers so that it could be handled without grovelling through journalctl.
Meaning that all that systemd does is start/stop/restart a docker container and everything else is handled with plaintext logs, started with scripts, and is generally just like it was before pottering got his hands on the keyboard.
"Start this service after The Network[tm] is up".
Yes, yes, we know systemD _claims_ this is trivial, with resolving the dependency tree and all that.
The reality is a bad joke and a sometimes twisty maze, along with pointing blame fingers, the usual excuses and "you're doing it wrong".
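For what it's worth, the incantation everyone reaches for is the pair below - and the complaint above still stands, because this only waits until whatever network management service the distro runs *says* the network is up, which may not mean your interface has a routable address:

```ini
# Fragment of a unit that should start after "The Network[tm]" is up.
# Both lines are needed: After= alone only orders the units, it does
# not pull network-online.target into the boot transaction at all.
[Unit]
Wants=network-online.target
After=network-online.target
```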
Linux isn't Unix. It wasn't designed to be like it.
Nice link, and I have to say, I agree with what Torvalds says
"Software evolves. It isn't designed. The only question is how strictly you _control_ the evolution, and how open you are to external sources of mutations."

I don't want to put words in the mouth of the OP, but I'd perhaps phrase it more carefully as: "Systemd doesn't follow the minimalist Unix design philosophy."
You can have a design philosophy while not having an actual design.
And like any philosophy, it is subject to much emotive argument ...
Can someone explain ...... what the issue is with systemd ?
yes of course. As soon as you explain to me why systemd is any good. You see, the point is, SysV init worked very well for most people. Then some people invented use-cases where they pretended that SysV init didn't work well, and that systemd solved THEIR problems. And after that EVERY-BODY should use systemd, even those who didn't have ANY problem with SysV init.
In my own little case, I dislike systemd because the times I've tried to figure it out, I found it disjointed and confusing with things scattered all over in unexpected places. It's a whole new realm of knowledge that we have to master, and some of us just don't see the point in expending the effort to gain mastery. Reluctance, not kool-aid. For most of us sysadmins, it doesn't really make anything any better, but it makes a whole lot of things newer. The knowledge we worked so hard learning, the locations of various configuration files we etched into our brains, the general "how to do things on Linux" that we gained, we're supposed to just throw all that out the window and start over. THAT is why we are resisting. Because systemd shows that nothing is sacred, nothing is reliable, and change is king. RedHat will bend each and every one of us over on a whim and roger us into submission, and there's jack-shit we can do about it but change careers or keep running increasingly ancient distros, while vainly hoping that one day, regular init will come back into vogue and we'll know what we're doing again.
I'm sure that a systemd apologist can pick apart my vague and emotionally driven diatribe, and show me for the old fool that I am. So be it.
First issue: complexity. The complexity may not bother you, but there are some consequences.
Compare https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=sysvinit with https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=systemd .
Second issue: Scope creep. More and more is handled by systemd. Yes, systemd proponents say "You can turn a lot off at compile time, and even more at runtime. Thus you can choose freely how much feature creep you want.", but that is basically denying that most people use Linux distributions, instead of compiling everything themselves.
Third issue: Systemd programmers do not adhere to standards. They actively state that they don't adhere to POSIX, but other standards are also treated as optional. Resulting, for example, in https://www.theregister.com/2022/08/30/ubuntu_systemd_dns_update/
For me, the reason to go to Slackware was that, every time my NFS hung, a reset of systemd was the solution.
My thoughts exactly,
I said for years, they'll attempt to replace the 'spaghetti ball crud' underpinnings of current Windows with Linux, and just keep the Windows desktop GUI, so for users it looks for all intents and purposes like "Windows", but fundamentally it becomes a robust, Linux based, multi-user networked OS beneath.
That's the goal.
If that OS masquerading as Windows Desktop Cloud Edition, is run on Microsoft's own Azure Cloud servers, you're going to have a difficult job proving it's not based on an underlying robust, multi-user open source Linux based OS.
The whole idea will be that to the end user, for all intents and purposes, it still looks and operates like a Windows Desktop GUI, but the underlying core is Linux.
Why do this? Microsoft's own support costs. They don't have a clue regarding the operation of some of the legacy code contained in the current patched up spaghetti ball crud that is Windows 10/11 and Office.
That's why they keep issuing patches, which keeps breaking things, which they issue another patch on a patch to attempt to mend things, which keeps breaking things and it seems to be getting a lot worse in the last 6 months.
No, they are trying to take over linux from the outside. Getting apps to only run on WSL and then trying to say that Windows is cheaper than RedHat etc.
It's your classic divide and conquer. All the more reason why RedHat should ditch systemd and produce something better.
"System D is a Line troop tradition. The men organize themselves into small units and go into a section of town where they all drink until they can't hold any more. Then they tell the saloon owners they can't pay. If any of them causes trouble, they wreck his place, with the others converging onto the troublesome bar while more units delay the guard."
-- Jerry Pournelle, _Falkenberg's Legion_
Sorry, that should be kill -9 $ME and I don't care what gets left open and hanging.
or in WSL: sudo systemctl stop $MYSERVICENAME and maybe it will stop it or maybe it will complain and leave it running.
Now if systemd could go a bit further into Windows it could improve the powershell horror