Shoggoth
+1 for the Lovecraft reference!
The MX Linux project has rolled out a new major release, based on Debian 12, and is on its way to becoming our favorite distro. Around this time last year, MX Linux was new to us, as we said when we looked at version 21.2. Now at version 23, its developers describe it as a middleweight distro. Don't be misled, though: while it …
They've created a user manual with a load of information, and searchable forums, immediately placing it way above Devuan.
I'm sure Devuan ultimately works, but its documentation is absolutely dire. 'Ask on IRC' rarely works well, and the fallback options are searching Google or seeing if the documentation of Arch Linux (usually excellent) covers your situation.
"I'm sure Devuan ultimately works but its documentation is absolutely dire."
If by "ultimately works" you mean its release schedule lags behind Debian, then I agree. That may mean your very latest H/W might not have drivers in the install image. Otherwise it just works, in the way Debian just works, but better for not having the systemd obfuscation.
As for documentation, Debian's will fit better than Arch's, and I've never heard of anyone calling the Debian Administrator's Handbook dire. A pre-systemd version will obviously be needed to cover sysvinit.
I should add that I ran the then-next version of Devuan on a Pi Nextcloud server months before it was officially released, and I currently have Daedalus, the Bookworm-based version, running on a laptop, and will probably get round to updating the main laptop shortly.
" ... If by ultimately works you mean its release schedule lags behind Debian ..."
So what does "lags behind" have to do with it actually working? Hint: it has absolutely nothing to do with it. Not a damn thing.
Based on that fact, how is your comment, or that part of it, relevant at all? Another hint: It's not.
So I guess my point is ..... WT=actual-F ?
You might as well have said ..... " ... If by ultimately works you mean, if the sky is blue ... ".
< shakes head >
"If it is so bad why don't you develop/code a better version?"
If by "better version" you mean a better version of systemd then you have to remember that its problems lie as much in the concept as the implementation. A "better" version of systemd would still be systemd.
If by "better version" you mean a better version of init then there's no need. Better inits already exist.
What is really needed is a better version of FOSS politics than that which got systemd foisted on us.
"If it is so bad why don't you develop/code a better version?"
Why would anyone want a "better" version of systemd when the solution is to have an init system that is an init system and not a Swiss Army knife?
A "better version" of systemd is to strip it back to just being an init system or bin it and use one of the exiting init systems
Quality is a subjective metric.
I once worked on software which I thought to be utter garbage internally since the code was horrible, ill structured and completely unreadable. The customers, however, loved it since it did exactly what they wanted, was easy to use and fast.
I see systemd similarly. Developers may hate it, but Linux desktop users don't care, since it solves problems they were having with init.d-powered systems.
Your question is likely meant to be a witty rhetorical riposte, but I'll give you an answer, one which you probably already have seen. Variations on it have been posited on ElReg in the past, at least.
The short answer: marketing and politics.
Poettering sold his bill of goods to the Right People at Red Hat, possibly project manager types, probably marketing and sales. The pitch was likely something about making boot faster and more reliable, maybe including some tenuous tidbits about revitalizing the Linux Desktop (Lennart's own laptop was better than ever, eh?), and he may have had the Gnome people on board already by that point, tying together the fortunes and futures of both projects. It's no great secret: Red Hat is all-in on Gnome. But overall: here's this new cool feature thing that nobody else has yet! And it comes from us -- Red Hat!
However the speech(es) and presentation actually went -- there are some insider stories and versions of the history out there, fwiw -- the Right People at Red Hat were convinced, and the seeds were sown at that point.
Because, for better or worse, for the longest time Red Hat has pretty much dictated where Linux distributions go. Oh sure, Debian, Canonical, and SUSE all have their piece of the pie among the major distributions, but regardless of the quality of their OS, none of them hold as much sway over Linux direction as Red Hat.
Maybe that's changing. Red Hat has shown (like any big corporation) they aren't immune to foot-shooting, and they've alienated parts of the open source community over time with past and recent actions. How significant the impact will be is still an open question.
Today, that's your answer: "because Red Hat did it". And for the mainstream of Linux, that's sufficient for others to follow. For now....
Is the systemd hate about some higher-level use of Linux than mine? Which is just my home desktop, whose primary purpose is to play games, browse the web, and watch videos.
I think I had to look into a problem once. I kinda like how the commands were similar to net used in Windows. But I've had limited exposure to it beyond reading about it.
Classic sysvinit-style is poor because you cannot simply say "this daemon I'm installing requires these services to be available and provides the following service", instead you have to manually work out which daemons are needed and the order to start them. That works for servers-as-pets (you can take the time) and servers-as-herds (they're all identical), but not in between - and not for desktop/laptop.
So there was a need that systemd claimed it would resolve. And it kind of did, but...
A large part of the problem is that instead of being a "figure out dependencies and start the daemons in the right order when triggered" tool, it's taken over huge amounts of functionality from said daemons.
That is not the *nix way, and it's also certain to cause trouble - a bug in one essential service is now a bug in several of them etc.
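To make that concrete: under systemd a packager just declares a daemon's needs in a unit file, and the init works out the start order. A minimal sketch (the daemon name is made up for illustration):

[Unit]
Description=Example daemon
After=network.target
Requires=network.target

[Service]
ExecStart=/usr/sbin/exampled

[Install]
WantedBy=multi-user.target

With classic sysvinit you encode the same knowledge by hand, in the numeric prefixes of the /etc/rc2.d symlinks (S20network before S30exampled, say).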
The other, rather more significant problem is Poettering.
"I kinda like how the commands were similar to net used in Windows."
That's an interesting observation. From my perspective, as someone who used MS-DOS from around version 2 (and CP/M before that), stayed with Windows through to about 98 and, while still using WinXP and 7, have been primarily a FreeBSD user since 4.3R, I've been quite impressed by how much Windows, at the command line and in its boot process, has moved in the direction of a Unix-like OS under the hood.
... Devuan was really painless to install. I had some problems with the current Debian though: I want to set my language to English, which makes lots of things much simpler. However, the graphical installer then insisted on not offering me the correct time zone nor the correct keyboard. Which is stupid.
You should've mentioned MX's frugal/persistence install mode. This allows you to install MX on any (and I mean ANY) PC with, say, 8GB of free disk space. I am not talking about shuffling partitions or creating a new partition where MX would be installed: no, MX can co-exist, peacefully and fully featured, with a Windows 7...11 install on the very same NTFS partition, given enough free space.
This allowed me, back in the day, to test-drive the then-current MX 17 in parallel with my aging Win7 install for a few months and to gradually switch from one OS to the other... Even now, six or so years on, I still have MX (now 23) running frugal, as it's incredibly easy to back up (just copy three or four files) and a breeze to install on new machines, even without the snapshot/ISO route.
The beer goes to the devs.
Intrigued to see the number of Devuan-related posts here. Seems the Reg forums are the only independent place on the planet that admits to the existence of us Devuantes.
I am intrigued by the "improved" Nala - how is it better than Synaptic?
But "its additional polish compared to Devuan could win you over" reads to me as "its additional polish compared to Devuan could fsck you over." Debian/Devuan never sold on polish, and that's the way I like 'em.
I am all in favour of distros free from systemd and might take MX Linux for a spin in Virtualbox just to see what it can do.
I don't think that I will be installing it permanently though.
Because MX Linux is derived from Debian, I assume it has inherited sudo as well. Now, it might be the old sysadmin in me, but I never liked the idea of a user getting admin privileges on a computer. It strikes me as asking for trouble if you are the go-to guy for support with family and friends.
I think that Texstar and the folks at PCLinuxOS have the right idea: stick to the UNIX way of doing things and only allow root access to the sensitive parts of the system.
Now I have, in the past, been downvoted for voicing concerns about sudo, but cases differ, and if you're happy with sudo then use it.
Part of the joy of Linux is that your boxes are your boxes and no-one can tell you what is right and what is wrong.
MX Linux does look good though.
You are aware that sudo was written by admins, just like you?
It was invented to allow admins to give more granular access to some facilities in a more flexible way than UNIX groups on a UNIX-like system without giving full superuser access.
If you've ever run a UNIX-like OS in a larger environment, especially one running production services, you will be glad that you can give facilities such as creating user accounts, managing filesystem mounting, starting and stopping application software etc. to lower grade administrators and operators without handing out the crown-jewels to people who could be dangerous with too much power. Remember some UNIX systems in the past have been mainframe-class systems, and have had hundreds of users and run many applications on the same system. They're not all run as workstations.
Yes, it can be laborious to configure the sudoers file, but there are tricks you can do (like setting up rules to allow members of various groups access to different sets of commands and integrating with LDAP or Kerberos) to allow you to almost achieve RBAC type functionality.
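For instance, a sketch of the sort of group rules I mean (the group names and command sets here are hypothetical):

Cmnd_Alias PRINTING = /usr/sbin/lpadmin, /usr/bin/cancel
Cmnd_Alias ACCOUNTS = /usr/sbin/useradd, /usr/sbin/usermod
%helpdesk   ALL = PRINTING
%jradmins   ALL = ACCOUNTS, PRINTING

Members of those groups get just those commands with elevated rights, nothing more, and revoking access is a matter of group membership rather than shared passwords.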
I agree that the standard sudoers file which most Linuxes ship, which effectively only sets up administrator and non-administrator accounts is too blunt, but used properly it can be a very effective tool, especially for systems that are managed by many people. But even using it as an Admin, with full access, using sudo allows me to protect myself by running as a normal user, and just jumping over the privilege hurdle only when I need it. Admins who work as root all the time are asking for finger trouble to kill their systems!
I wonder just how many people badmouthing sudo have never really investigated just how it can be controlled to make it more useful, and have just taken the out-of-the-box configuration at face value. I really wouldn't want to go back to an environment where the only way of administering systems was by logging on or su-ing as root, or setting suid bits all over the place (before sudo became widely adopted, I used a similar system called "Op", and I also know that there have been other very similar solutions, both freeware and commercial, that do similar things, so it's a problem that has generated tools like sudo more than once).
It beats the stupid UAC of Vista and later.
Win9x programmers broke the NT security model, which was fine before Windows 2000; Win9x programs supposedly XP-compatible created a load of security grief. I think Debian / Mint etc. overtook Windows in usability and stability 10+ years ago. It's only the half-dozen companies producing only for Windows that block Mac and Linux from average corporations and small businesses for clerical, stock, ordering, payroll and accounts work.
"It [sudo]was invented to allow admins to give more granular access to some facilities in a more flexible way than UNIX groups on a UNIX-like system without giving full superuser access.
If you've ever run a UNIX-like OS in a larger environment..."
This entire issue seems to be the usual ongoing battle of convenience vs security.
I've run Unix from V7 onwards in production environments.
1. Back then there were several administrative accounts for different functions. For instance help-desk users could be given access to the lpadmin account to let them sort out printer woes. This was a sensible way to implement such granular access. The helpdesk user could su from their ordinary account to lpadmin with the lpadmin password, sort out the printer but with no extra privileges and then CTRL-D back to their ordinary account.
2. This seems to have been looked on as inconvenient and the other UIDs dropped out of use and everyone used root but at least it needed an extra password to gain access.
3. Mass use of root was then looked on as insecure so sudo was invented. The supposedly granular extra facilities were accessed by giving the user's ordinary password again.
4. Of course, maintaining all these granularities in the sudoers file is inconvenient, so we now have Ubuntu and its friends - including MX - having a standard user gain full root access just by using their own password. 2FA it isn't.
We've gone back to stage 2 but without the safeguard of a second password. Yes, I follow the arguments as to why sudo was invented but the reality is that it's being used in a way that is counterproductive to what it set out to achieve. In this round of convenience vs security convenience has won.
I call your Edition 7 (I know, I used the term Version 7 for years, before I realized that the correct term uses the edition of the manual as the reference), and raise you Edition 6!
I remember different functional users for different functions, and I also remember /etc/group with group passwords, a feature that still nominally exists but fell out of favour when secondary group sets were adopted from BSD. (Yes, in Edition 7, and even up to SVR3, you were originally only a member of one group at a time, and could switch between groups with the newgrp command, with access conditioned by the group password - ever wondered why /etc/group entries have a password field?) I also remember that back then these systems were reasonably small (my Edition 6 systems had about 12 terminals and a user base of a couple of hundred, so administration was largely a one-man band).
I'm not saying sudo is perfect. The system I mentioned, called op (or op.access), allowed you to put in some argument checking if you wanted it, something you have some control over in sudo, but it's crude, mainly checking that arguments are present or missing. But hey, something is better than nothing (which is what you get when using a superuser shell).
Also, you talk about 2FA? If you want 2FA, configure your pluggable authentication modules for it; sudo can use it as well (although I think what you're really talking about is two separate user authentications, yours and root's, which is not really 2FA, just two passwords).
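For example, on a Debian-family box with the libpam-google-authenticator package installed (just a sketch - other PAM modules work the same way), one extra line in /etc/pam.d/sudo makes sudo demand a TOTP code on top of the password:

auth required pam_google_authenticator.so
@include common-auth

(The @include line is already there; only the pam_google_authenticator line is new.)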
The way Ubuntu and other distros configure sudo out of the box is similar to the way Windows works without domains set up, so it is useful for people transitioning to Linux from home systems. Once they know better, they may opt to change, but many won't. Basically, if you don't like sudo, very little has changed: set a root password and empty the sudoers file, and you have what you want.
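That is, something like this, done from your sudo-capable account before you lock yourself out:

sudo passwd root     # give root a real password first
sudo visudo          # then remove or comment out the %sudo (or %admin) line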
"to allow you to almost achieve RBAC type functionality."
Ahhh. This is one of the reasons that I love Solaris. It does RBAC natively . . .
And, yes, there are some /*way*/ serious production systems that run on Solaris . . . and, of course, Oracle runs all of its TPC benchmarks on Solaris ;->
But then, along with RBAC comes the problem of coding it and people learning how to use it. And for what it's intended to do, SELinux in some ways is a good substitute . . . Different model, but functional. But then, with NIST going to ABAC, SELinux fits very nicely there . . .
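From memory, and hedged because the exact syntax varies between Solaris releases, handing a user printer administration looks something like:

roleadd -P "Printer Management" prtadm   # a role carrying a standard rights profile
usermod -R prtadm alice                  # alice may now assume the role with "su prtadm"

No root password changes hands at any point.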
If you want fine-grained control, the better way would be to revert to the original concept: lpadmin owns the printers, bin owns the contents of /bin (if you still have it), /usr/bin and so on. You then have the granularity, you don't have to hand out the root password to those who don't need it, and you don't end up with the situation where you might demand 2FA for users to access the services but only 1FA for access to control those services.
The problem with that approach is that if you need to revoke someone's lpadmin access, you need to change the lpadmin password and distribute it to all the current users that still require the access.
With sudo, you just remove that user from the group allowed to sudo as lpadmin.
I'm sure there are plenty of cases where su is preferable, but in my (admittedly limited) experience, sudo was more flexible.
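A hypothetical sudoers rule to that effect (group name and command list invented for illustration):

%printops ALL = (lpadmin) /usr/sbin/lpadmin, /usr/sbin/cupsenable, /usr/sbin/cupsdisable

Revoking someone is then just "gpasswd -d alice printops" - no password redistribution needed.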
"Because MX Linux is derived from Debian I assume it has inherited sudo as well."
Debian and Devuan do indeed have sudo. However, you can take it or leave it, because the installation sets up a root account (which Ubuntu and friends don't) and doesn't, by default, put users in the sudoers file (which Ubuntu and friends do). Out of the box, the old way is the way to do things. Any time I have to set up a Ubuntu-derived OS, one of the first things to do is set a root password. It also expects the likes of the KDE partition manager to be given the root password.
I tried an install on my little nettop device. It had a small but adequate unused logical volume group, with Mint installed in regular partitions. Debian seems no longer able to install into logical volumes - would this? Yes, it could see the logical volumes, but it needed a slightly larger root volume than I'd set up. Not a problem: there was space left, and as I was using the KDE live ISO, the KDE partition manager was available and could make the change without dropping to the command line. Not so good: it needed the demo user's password instead of root's. However, installation did offer to set up an optional root password, so maybe it would then default to Debian style. No such luck - the installed version still follows the Ubuntu approach.
Installation, BTW, starts copying the files in parallel with entering the user data. It doesn't, however, set up the network connection, at least not for wireless, so it doesn't copy updated packages from the repository, leaving an update required at the end of the install. Of course, when it gets to about 96% complete it has to do a grub install, and grub probing takes up the other 90% of the install time, followed by rebuilding the initrd, which takes the final 90%. The subsequent update built a couple of kernel modules; I can't remember ever encountering this on Debian. It's painfully slow on this little box, and if it were a regular feature of updates it would be a bit of a pain - probably not as bad as Windows updates, but better avoided.
Although it's Debian derived it seems to work more in a Mint sort of way. In fact the kernel announces itself as a Mint kernel in the logs. It'll take a while before I decide whether to use it to replace Mint on the nettop. I don't think it will replace Devuan on the bigger laptops.
" ... the idea of having a user getting admin privileges on a computer ... "
Well, I can see your point. I just have a question: as the ONLY user, how do I do the things that sudo allows me to do?
Log out of my user account (a hassle) and log in as the admin (more hassle), get access to serious shit that I REALLY shouldn't have access to, then log out (again) and log back in (again) so that I can now do what I need to do?
Okay.
No.
Just fuck no.
> Log out of my user account ( a hassle ) and log in as the admin ( more hassle ) ... {some shit} .... then log out ( again ), log back in ( again )
When I was a boy, at the terminal (often paper): type login, root, pass, see the special root prompt, do dirty deeds, type logout, and I am myself again.
But but but---- the GUI is more convenient!! (Than typing a dozen characters?)
And, also, it's generally regarded in real production environments that you don't allow direct root login except on the console of the system.
This is not a problem when the console is on the system right in front of you, but with production UNIX systems, the console is normally next to the system in your secure machine room. This makes it useful in emergencies, and does not expose your root password across terminal serial lines or networks (Oh, and turn off remote access to your KVM switch - I've come across systems where they apply the rule of no remote login for root, only for the KVM to remain accessible, sometimes without an additional password, across the network, and sometimes via telnet!)
I like the “antiX Magic” desktop for a minimalist desktop experience. Is it possible to install that on MX Linux?
No offense intended, but this doesn’t offer “me” anything worthwhile.
My computer doesn’t run an operating system anymore; now it runs a Proxmox hypervisor that hosts a whole bunch of VMs. It’s each individual VM that boots into the resource-sucking OS, and as far as I am concerned those OSes are there strictly to provide function libraries that enable the various applications to run.
(Because the point of the computer is to run the applications.)
I’m tired of operating systems, they keep getting changed around which is disturbing the orderly operation of the purposeful applications. My hardware dates back as far as 2013 and serves ALL of my purposes already.
Same as my kitchen table that I also bought back in 2013. I’m not letting some furniture enthusiast come into my home and start making changes I don’t want or need. I use my kitchen table each and every day. The thing already works the way it is. Done.
"Because the point of the computer is to run the applications...I’m tired of operating systems."
Your collection of running applications - I take it you have several there at any one time - is a collection of chunks of code needing access to resources such as CPU, memory and peripherals. How do they get that access? If it was just one application, yes, it could run on top of bare metal, but how would you then get a second to run when the first has direct control? And a third? And a fourth?
The job of an OS is to share those resources out so that each application gets its share, and to keep applications from accessing each other's resources while letting them communicate with each other as necessary. It passes control from one chunk of code to another so that, although there's a single continuous thread of execution (or several nowadays), the user is presented with the illusion of separate applications, all with access to their hardware.
What you describe is a complex multi-layered OS starting with, but very definitely including, your hypervisor, the VMs and the OSes inside the VMs. There may be a good reason for that hypervisor but you do have a much more complicated setup than a single OS running on bare metal. It's not surprising you're tired of OSes and that they and their VMs use a lot of resources for themselves to do the job they've been set. If indeed there's a good reason for that setup then that's the price that has to be paid to meet the requirement; if there isn't then it's the price for an unnecessarily complex arrangement.
Nobody's asking you to change your hardware or what runs on it*. But a lot of people have bought a lot of hardware in the last 10 years** and they've all had to install an appropriate OS on it and will continue to do so. Articles and discussions such as this are part of deciding just what is appropriate and reviewing if the original decision remains appropriate. In fact you must have had some process for deciding on what was appropriate for your requirements even if you don't feel a need for reviewing it.
* Nobody was even asking you to read the article if it held no interest for you.
** Just as a lot of people will have set up home in the last 10 years and needed to buy a kitchen table.
> My computer doesn’t run an operating system anymore, now it runs a Proxmox hypervisor
You are aware, are you not, that a hypervisor is an operating system?
The user applications that a hypervisor runs are VMs running guest operating systems, rather than games or office apps.
> My hardware dates back as far as 2013 and serves ALL of my purposes already.
>
> Same as my kitchen table that I also bought back in 2013. I’m not letting some furniture enthusiast come into my home and start making changes I don’t want or need. I use my kitchen table each and every day. The thing already works the way it is.
> Done.
1. Needs change. What if you find your kitchen table needs to be bigger because you're having more people over? Then you need more kitchen chairs, and a bigger table. What if one of the chair legs breaks, and upon inspection you see that the rest are going along the same path? Something's gotta change ...
2. Hardware dies. At that point, either change or do without. It's like someone who was complaining their old laptop with 4 GB of RAM was too slow. People offered to put in an SSD, but that doesn't change the "not enough RAM" problem (it was running Windows S), or the crappy CPU. Cheaper to just call it a day and buy a $100 tablet. If you're going to have to change hardware anyway, you might as well take advantage of the opportunity for a meaningful series of upgrades.
3. Your needs might not change, but your wants might. Your current hardware might not be able to do 4k, and you're okay with that - until you see someone with a big-ass 4k screen (or two). "What is seen cannot be unseen."
4. Your needs might change. You might want to get into WFH (Work From Home), which goes better with a couple of big-ass 4k screens (same as multi-monitor setups save business time and money by making their workers more efficient). Or your screen goes in the crapper - why not take the opportunity to do an upgrade?
5. YOU might change. Your sight goes in the crapper, a couple of those big-ass 4k screens can change which side of the digital divide you end up on.
Some of these are just possibilities; some (like #2) are certainties. And #5 will hit most of the population as they get older.
I've tried to use Devuan a few times, but where I stumble is the installation of packages that require systemd. PHP, for example, requires it. Why the fuck does PHP require systemd? Why? It's always at that point that I stop, put my tail between my legs, and have to put up with another distro and systemd's bullshit.
I've got a new laptop to play with that will be my daily driver at home (as opposed to the work laptop that I use), and this MX Linux will be the first thing on there. If it provides everything I need - and by that I mean just a laptop I can develop PHP applications with and do some video and audio editing on - then it'll be all I ever use.
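(For what it's worth, before giving up on Devuan you can at least check whether such a dependency is hard or merely recommended - the package name here is a guess, it varies by release:

apt-cache depends php8.2-fpm | grep -i systemd

If it shows up as a Recommends rather than a Depends, "apt-get install --no-install-recommends" will skip it.)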
I've gone past my anger at systemd. I don't even loathe it. I look at it like I look at the British political system. Everything could be so much better if a particular group of people weren't involved at all.
Yes, I've said it. Systemd is a very Tory thing to have. You know I'm right.
systemd disapproves of 20mph speed limits!
systemd doesn't like foreigners!
systemd fasttracks applications for government contracts to its friends!
systemd builds prison barges that are fire risks! (because "Great Expectations" is not a novel, it's a strategy document)
Actually now I read that, I think your comparison is a bit harsh. On systemd.
If systemd were just an init system, it would not be so reviled. It is the fact that systemd has taken over so much more that makes it avoided by so many. It has not only taken over various pieces of functionality, but also ways of doing things. It is doing things in a more "MS Windows-like" way, both technically and conceptually. It has managed to become a dependency in more than one area of functionality.
A Linux OS can be 100% systemd-free. A Linux OS can be full-on systemd. There is a third, wide-ranging variety: less than full systemd, in order to deal with dependencies. Because MX Linux does not use systemd init, it falls into the third category, barely. MX Linux is a systemd Linux OS on all other counts, and that matters more than systemd just not being the active init.
I do not think there is any Linux OS which cannot have systemd added 100% with relative ease. On the other hand, taking a systemd Linux and removing systemd ranges from not too hard to nearly impossible. The systemd cancer is beginning to take hold even in Windows, through WSL. There are efforts to emulate systemd through compatibility daemons on the BSDs to allow some Linux-origin software to run.
It is the dependencies on systemd which are making matters worse. I have been able to overcome some of the supposed dependencies by manually installing a "service". But other dependencies go far deeper and would require a rip-and-replace of a major part of a package; it might be easier to rewrite the whole package. Software dependency on systemd has to be considered part of the cancer.
There should be an organized effort to make software and Linux "systemd not required".
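One trick that sometimes works for the softer dependencies - hedged heavily, because whether the package then actually runs is another matter entirely - is Debian's equivs tool, which builds an empty package that Provides: whatever name the dependency insists on:

apt install equivs
equivs-control dummy.control        # writes a template control file
# edit dummy.control: set Package: and Provides: to the dependency's name
equivs-build dummy.control
dpkg -i ./dummy*_all.deb            # the .deb is named after the Package: field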
As a non-user, I'm always astonished by the level of incomprehensible jargon in articles about Linux. Imagine if Apple or Microsoft tried to sell their software like this.
I would love to use a free, open-source operating system, but can't see how Linux is ever going to appeal to a wider audience than masochistic nerds. I have decades of programming experience in dozens of languages, and even used Unix at university. I should be an ideal target, but every time I decide to dabble with a new, supposedly user-friendly flavour of Linux, it's never long before it starts throwing obscure jargon at me about partition types or whatever, and I quickly decide life's too short. Now imagine what it must be like for a typical home user.
Decades of experience programming in dozens of languages, and obscure jargon confuses you? Our entire industry is built on obscure jargon! Imagine if you'd thrown your hands up in the air complaining about jargon the first time someone mentioned a hashtable, or the travelling salesman problem, you'd have had a very different career.
Literally the only time you will see partition types mentioned is during installation - click OK, accept the defaults, move on.
Here are just some of the names and terms used in this one article: MX, Mint, Debian, Devuan, systemd, systemd-shim, siduction, Xfce, KDE, Fluxbox, MATE, Cinnamon, Whisker Menus, tint2 desktop panels, CrunchBang, BunsenLabs, the Nala package manager, Snap, Flatpak, Ubuntu, Zinc, deb-get, Liquorix kernels, antiX, Orca, the PipeWire audio server, WirePlumber, UFW firewalls, polkit, AHS.
Good luck asking Joe Public to navigate that mess.
Joe Public will just accept the defaults. And probably stick with Mint or Ubuntu, as these are the most mentioned distributions.
Your average non-technical computer user (like my wife) really doesn't care about what is under the covers, as long as it does what they want.
The people who come across the jargon you've listed are, by definition because of what they are reading, at least moderately capable of understanding such jargon because they're reading The Register, and have Google* to help them with things they don't recognise.
(* other search engines are available)
Hear, hear!
Every six months or so I download and install some variation of Linux; each time I sadly rebuild back to Windows. My favourite programs will not run on Linux, and once I get away from the Linux GUI opening screen it is gobbledegook....
Yes, I am using Office 2010, Jiosoft Money Manager, Excel 2010, Brave Browser, Visio 2007, Dymo Label v8, Picasa, and... they all work! I am using a homebuilt i9 with 64GB of RAM and 18TB of HD storage over five drives.
I design things and monitor the finance market, so I do not want outsiders to see my work. My email machine is air-gapped from my other networks, which are air-gapped from my storage. So "cloud" computing will never be of any interest to me.
I dare Microsoft to make Windows 7 open source..... bye-bye Windows, Mac and Linux if that ever happens.......
If you are tied into programs that only run on Windows, then Linux will never satisfy you (and in some cases, neither will MacOS). The only solution would be the complexity of running Windows in a VM.
I'm not going to try to suggest Open Source alternatives for your packages, because there aren't any in some cases. But that is not the fault of Linux, it's the fault of the software writers who don't (or won't) consider Linux as a platform to write for. And if you really need those packages, you have no alternative.
I'm curious. If you're as hardware-savvy as your estate suggests, why install Linux and then rebuild back to Windows? Surely you can find another system, and just install and junk Linux distros without affecting your main system? Seems like you're not being completely honest, if you ask me.
Pathetic - not a torrent link in sight, just a Scumforge never-will-be-downloaded page link, thanks to all the garbage they run on it that will never be run on my computer. There we are: the slimeballs at Google can manage a link; apparently the author is too lazy or incompetent to provide the link page.
https://mxlinux.org/download-links/
Adding <ipaddress> <servername> to /etc/hosts makes the remote samba shares work.
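For example, with a made-up address and name, the line looks like:

192.168.0.50    nasbox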
I've seen this occasionally on Mint too, but never found which network thing was missing/broken.
MX Linux seems to boot about twice as fast and seems faster in use. No problem adding my themes from my Mint + MATE install, as well as fonts and .XCompose.
Systemd is very problematic for security - one of the many, many areas where Linux is so far behind Windows (I know, right? Who woulda thought) and macOS.
"Linux being secure is a common misconception in the security and privacy realm. Linux is thought to be secure primarily because of its source model, popular usage in servers, small userbase and confusion about its security features."
This article goes on to talk about specifically why Linux is so abysmal at security. Yes, it doesn't have invasive telemetry, but security is not the same as privacy, and true privacy is unachievable without robust security, which unfortunately Linux does not have.
Here's the link: https://madaidans-insecurities.github.io/linux.html
Or find it by searching for:
madaidans insecurities linux
Here's what it covers:
1. Sandboxing
2. Exploit Mitigations
3. Kernel
4. The Nonexistent Boundary of Root
5. Examples
6. Distribution-specific Issues
7. Manual Hardening
8. Other Security Researchers' Views on Linux
Having been a huge Linux enthusiast since 2006, and using Linux as my primary OS for the last decade, I was shocked when I was first made aware of the lack of security that Linux employs (even worse, the complete disregard for even basic security practices in development by top contributors, maintainers and devs).
I've been using GrapheneOS as my smartphone OS, and reading their site and learning more about security has me reeling about Linux. I definitely don't want to go back to Windows or macOS! But knowing they are both significantly and substantially more secure than Linux is heartbreaking.
I'm looking at trying out Qubes OS (which is described as 'not a Linux distro'), but it feels like starting over. I also want to help bring awareness of security to our beautiful and diverse Linux community.
I filed a bug report a week ago about the lack of support for Intel's A770 cards (I have a pair of Arc 770LEs hooked up to 6 4k screens - it makes working with low vision easy, and MSFS in either 8k or 16k is awesome).
On the afternoon of POETS day, they emailed me with a possible solution. I tried it, and it works for one card (pick the one you want at boot time). One less barrier to going back to linux.
I should have posted sooner, but there are so many trolls on so many forums that I just don't bother any more. I created an account here because MX Linux worked when Devuan failed, and last year's attempts to install on this machine with NVidia cards did not go well at all (it insisted on installing the Nouveau drivers, which are old enough to vote but not old enough to work).
[Author here]
> I have a pair of Arc 770LEs hooked up to 6 4k screens
My word.
> MSFS in either 8k or 16k is awesome
What is "MSFS"?
> last year's attempts to install on this machine with NVidia cards did not go well at all
Aha. No, been there and done that.
> insisted on installing the Nouveau drivers
They are built in to most distros. They work well enough for 2D desktop stuff, in my experience.
In my previous role, at a Linux vendor, my desktop machine had an nVidia Fire card. I had 2 screens. I added a 2nd Fire card for a 3rd screen, and all hell broke loose.
Despite the same branding, different nVidia Fire cards have GPUs from different families and one given nVidia proprietary driver can only drive one GPU family. Mismatched GPU families are not permitted. The same applies on Windows, BTW.
I fought with it for _over a year_ until IT offered me a 4-port ATI card.
Then it just worked, first time every time, in Windows as well as in Linux. AMD's ATI drivers are FOSS and are included, and one high-end card means just one driver and no clashes.
My advice: consider AMD cards. If you can, have a single card, but if not, two with the same GPU model.
> What is "MSFS"?
Try the first result from ddg, google, or bing
It's a flight simulator that is noted for pushing hardware to its limits, more so than the typical dungeon crawler/quest/whatever. Especially at 4k and higher resolutions.
NVidia is pretty much irrelevant going forward, for two reasons:
1. They are getting out of the consumer GPU business and going all-in on selling compute units in bulk for AI clusters;
2. Unlike AMD and Intel, they can't integrate their GPUs with "their own brand" of CPU, because NVidia doesn't make CPUs.
AMD (and NVidia) are still using technology that they conceived more than a decade ago, using initial buffers of 960x540 to do an initial frame render, then upscaling that to the desired resolution. This involves repeated calls to the CPU for more data, creating bottlenecks.
Intel's new graphics chips start their rendering by laying out the initial buffer size as the final buffer size. You want 4k, you get an initial 4k image buffer. This is why I can render MSFS on 4 x 4k 50" screens and 2 x 65" 4k screens, and my piddly i5-12400 stays at body temperature or below. It just doesn't have much load.
Being low vision, those 6 screens make it possible for me to work (and play) without having to bother with stupid screen magnifiers. After trying them, I decided the proper solution was, instead of making a small portion of the screen bigger and scrolling all over the place, just to make the screen 5-6x bigger, and upscale it by 250%-350%. So now I can work like a human being again.
Started off with 2 50" screens, then 3, then added a 4th when it went on sale (and added multi-user capability on the 4 screens on one computer: duplicate mice, keyboards, game controllers, flight yokes and rudders, 3 webcams (left user, both users, right user), and 2 headsets).
Then added 2 x 65" screens, which are now my center screens, and make it easy to continue working despite more vision degradation.
When it was time to move on from the chip-shortage overpriced and underspecced NVidia cards that I bought when I started putting this system together in the spring of 2022, the Arcs had the best specs: a combined 32 GB of GDDR6 RAM, a combined 512-bit bus, and less combined power consumption than a single NVidia 4090.
This is an experimental prototype for exploring out-of-the-box-thinking solutions for people with low vision, because the places that are supposed to help us are stuck in solutions from the previous century. Everything is now complete and stable, so in September I'm going to contact medical caregivers (ophthalmologists, seniors' organizations, health care workers, etc.) - but NOT organizations that provide tertiary care to people with low vision. When I approached one last year to share my interim results, I got ghosted. So I ghosted them for this spring's scheduled visit. It's time to adapt to modern tech.
Anyway, back on-topic, I was impressed with how they got back with a solution in less than a week - one that works well enough for anyone who isn't crazy enough to be running 2 Arc 770LEs. Far better than the RTFM from other distros, or dumb chatbots from commercial services and suppliers.
So I'm really impressed with MX Linux. And once MSFS supports XeSS (Intel's inter-frame rendering engine that creates additional frames to speed up frame rates), I'll be ready to be even more impressed. After all, with NVidia dropping out, Microsoft/Asobo are going to have to support XeSS in addition to AMD's FSR3 frame interpolation.
In the end, even the best hardware in the world is useless without support. And unless you're into time-limited hardware like Chromebooks, a distro like MX Linux is where I'll be heading when I build my next Linux box. Not some systemd-polluted crap.
I started looking for alternative Linux distributions when Ubuntu went all-in on their crappy Snap applications. Nice concept on paper, but not in the computer. I had a production computer on which I left Snap installed, although I don't use any Snap applications, and after under two years there were 31 loop mount points for Snap applications.
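You can count the clutter for yourself on any snap-using box with something like:

mount | grep -c '/snap'       # or: findmnt -t squashfs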
I was using both Debian 11 and MX Linux 21, because neither could do all the things I had done in Ubuntu. So far, I haven't found anything I do that I can't do with MX Linux 23. They fixed a minor issue with user and group numbering from MX Linux 21. MX Tools is a handy set of tools that makes administering a system easy, and the MX User Manual (almost 200 pages) on the desktop goes into much detail on the structure, use, and configuration of MX Linux (applicable to other distributions to a large degree).
The only trouble I encountered was setting up Samba shares. As in MX Linux 21, I added two lines to the Global section to force the user to the main user/administrator of the system, i.e.:
force user = myadmin
force group = myadmin
and then making sure the shared folder has ownership of 'myadmin:myadmin' (this version of Linux uses that syntax instead of the more familiar, to me, 'myadmin.myadmin').
The share still didn't work, although a Windows computer could tell the share was present. I had to go into the firewall with 'gufw' and set a predefined rule, SAMBA, to open the Samba ports to IPv4 and IPv6 traffic.
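For anyone who prefers the command line to gufw, the equivalent is something like this (the 'Samba' application profile may or may not be present on a given install):

sudo ufw allow Samba          # use the app profile if it exists
sudo ufw allow 137,138/udp    # otherwise open the NetBIOS/SMB ports directly
sudo ufw allow 139,445/tcp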
I spent less time installing MX Linux 23 and troubleshooting why the shares didn't work than I ever spent installing and setting up a Windows server.