Re: They are none of those...
I find myself wondering how such wildly divergent sets of experiences and expectations with the same OSes develop.
In the world I have worked in for 20+ years, there has been no such distinction as the one being made here between daemons and processes. A "daemon" is used loosely to describe any process that doesn't end when its controlling terminal ends. In my experience, it's not a term reserved for init-managed processes.
In my world, only daemons expected to start with the OS are put in the init system. Not all long-running, daemonic applications are allowed to start just because the OS started; depending on the application and server in question, that could cause serious problems. For direct start/stop control, these applications have control scripts that nohup or otherwise double-fork them into the background specifically so they will not end with the controlling terminal. They aren't run as root or as a "plain" user, but as an application-specific, generic account, usually via sudo or something similar. If you don't disconnect the process from the terminal of whoever started it, it will die when that terminal closes, so every control script or program does this.
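To make the pattern concrete, here's a minimal sketch of the kind of control script I mean. All the names and paths are made up, the real scripts are considerably more defensive, and I've substituted `sleep 60` for the application binary so the sketch is self-contained:

```shell
# Sketch of a start/stop control script. "sleep 60" stands in for the
# real application binary; paths and names are illustrative only.
APP_CMD="sleep 60"          # really something like /opt/myapp/bin/myapp
PID_FILE="/tmp/myapp.pid"   # really somewhere under /var/run
LOG_FILE="/tmp/myapp.log"

app_start() {
    # nohup makes the child ignore the SIGHUP sent when the terminal
    # dies; the redirects plus '&' finish detaching it. We record the
    # PID so app_stop (or a human) can find the process later.
    nohup $APP_CMD >"$LOG_FILE" 2>&1 &
    echo $! >"$PID_FILE"
}

app_stop() {
    kill "$(cat "$PID_FILE")" 2>/dev/null
    rm -f "$PID_FILE"
}
```

A real script would wrap these in a `case "$1" in start|stop|...` dispatcher and sanity-check that the PID file isn't stale before trusting it.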
I work in IT for an extremely large multinational, and to my knowledge no one there manages long-running Unix / Linux applications any other way than I describe above. If they start with the server, they go in an init config. Otherwise, they are, at some level, controlled by programs doing the equivalent of nohup specifically so they can be started "by hand". This is considered completely normal and not in any way onerous or dangerous, assuming privilege to run the control scripts is managed correctly (which is itself not very onerous, though not everyone does it as they should).
Separately, and on my specific team, we have scheduled jobs that sometimes fail in ways that result in a need to re-run them manually. They are not long-running daemons in the sense of a web or database server, but they do take 4-6 hours to run to completion. We're not going to stick batch jobs in the init config, and we're not going to stay logged in for that long babysitting the process. Even if we have to manually check on its success when it's done, we don't want it to terminate mid-stream because our shell timed out or VPN dropped. We nohup the job and expect it to stay running.
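For those re-runs, the invocation is about as simple as it sounds. Since no shell is left waiting on the job, a trick we use is capturing the exit status to a file so success can be checked after the fact. (`sleep 2` stands in for the multi-hour job; the paths are illustrative.)

```shell
# Re-run the failed job by hand, detached from this terminal.
# 'sleep 2' stands in for the real 4-6 hour batch job.
nohup sh -c 'sleep 2; echo $? >/tmp/batch.rc' >/tmp/batch.log 2>&1 &
echo $! >/tmp/batch.pid

# ...log out, reconnect hours later, then check:
#   cat /tmp/batch.rc    # 0 means the job finished successfully
```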
I don't know why anyone would think it's hard to control processes started this way. All you need is a PID, and maybe sudo access to a control script that can use it. Except for some edge cases, it's easy to either store the PID when you launch the process or find it after the fact with 'ps'. Sure, an init system that tracks the PID is cleaner, takes that bookkeeping off your hands, and would probably handle those edge cases, but is that really so valuable in general?
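As for finding it after the fact, pgrep (from procps) does the ps-and-grep dance in one step. A sketch, with `sleep 45` standing in for the detached process's command line:

```shell
sleep 45 &                      # stand-in for a process started last week

# -f matches against the full command line and -x requires an exact
# match, so this finds our process and nothing else.
FOUND_PID=$(pgrep -fx 'sleep 45')

kill -0 "$FOUND_PID"            # is it alive?
kill "$FOUND_PID"               # ask it to shut down
```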
So the idea that this systemd behavior would be normal and expected for a Unix-like system is just incredibly strange to me. It's normal for Windows, not Unix/Linux. Based on the IT world I've been in, it feels like a solution to a problem no one ever mentioned.