For a second....
I misread the first line as "The fall of systemd is here" but alas, not today it seems...
The fall version of systemd is here, with support for increased boot security, including tightened full-disk encryption. The 113th version has the usual long feature list of very specific, targeted elements outlined in the release announcement. However, as one might expect following recent events, several of the headline …
As another Linux commentator noted, it is amazing how much systemd is "hated" yet how prevalent it is in the Linux distro world.
So the hatred seems to lie with the users, because the devs of the distros seem to love the thing.
Therefore it is reasonable to believe that systemd solves more problems than it creates for the distro builders. But conversely, if systemd is that "bad", or that hated by the user community, why don't the devs listen to the users instead? Apparently there is a level of disconnect between users' desires on an OS that promises user choice and what you are actually given - and one must wonder why.
"People hate changes unless they are the ones making the change."
Nah. Change is GOOD!
However, change for change's sake (or to put loot in the pockets of the shareholders) with absolutely no benefit to the userbase ... and in fact, negative impact on the userbase (being larger, it has more bugs, is less secure, takes longer to patch, etc. etc.) is BAD.
What I dislike about systemd is that it gets its tendrils even into stuff that is separate from any "init process" work. It now controls everything: networking, even sound (and don't get me started on pulseaudio...).
OK, and that I dislike change for the sake of it (yes, I get some of the motivation behind systemd, but seriously, telling me that simple human-readable scripts are less easy to maintain and use than this - pull the other one, it's got bells on it)
Quite. And I think that camel-nose-tent pretext is another thing which people find so distasteful about systemd.
I.e. it feels like subterfuge, like poettering and his Red Hat (at the time) enablers pulled a fast one on Linux folks, inflicting something unneeded and un-requested on (nearly) everyone, without discussion or recourse.
Lack of recourse since then has only gotten worse.
Now, I know (having read just a little of the public history) that there was some discussion about systemd. Some. Systemd didn't just show up in RHEL overnight, unannounced.
But even those early posts (e.g. on poettering's blog) came off as fait accompli, especially in retrospect. If he hadn't worked for Red Hat all of this might never have happened, but obviously he had backers and influence at the corporation, enough to sell his bill of goods to people with say-so there, and since Red Hat basically controls most of where mainstream Linux eventually goes, here we are.
It's a social disease.
Its symptoms include an arrogant disdain for anyone who isn't a developer, a gatekeeping process to drive away developers who don't fit in with their clique, and a lack of respect for the foundations of both the Linux and Unix communities. This toxic culture spreads like their code, sowing the seeds of discord deep into the organizations it inhabits.
That's a bit drastic.
I put together a thin client thing on Rock 4C+ SBCs running armbian.
It uses sway as the compositor / display server, wlfreerdp, and a modified build of wvkbd for an on-screen touch keyboard. This is because Xorg performance is terrible on these rockchip boards but Wayland performance is good.
It works really well, but things have to be started up in a certain order. FreeRDP can fail for various reasons (network, user lockout, etc), and wvkbd dies if the monitor is disconnected or turned off.
My first attempt used a bash script and wait [multiple pids] etc, figure out which process died, restart it, etc. It became an unmanageable mess.
I spent some time googling, and despite still being a newbie with systemd, I now have a perfect system where the dependencies will start and restart themselves in the correct order, as well as the sway@tty1 service running as the display manager, auto login etc
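For anyone curious how that maps onto systemd, the ordering-and-restart behaviour described above comes down to a handful of unit directives. A minimal sketch - unit names, paths, and the target here are illustrative, not the poster's actual configuration:

```ini
# ~/.config/systemd/user/wvkbd.service -- illustrative names throughout
[Unit]
Description=On-screen touch keyboard
# Only start after the compositor's session is up,
# and stop/restart along with it:
After=sway-session.target
PartOf=sway-session.target

[Service]
ExecStart=/usr/local/bin/wvkbd
# Bring it back automatically if it dies (e.g. on monitor disconnect):
Restart=on-failure
RestartSec=2

[Install]
WantedBy=sway-session.target
```

One such unit per process, each declaring its own After= and Restart= policy, replaces the hand-rolled supervision loop entirely.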
So you've somehow managed to get a kludge filled with dodgy Beta code, running on a dodgy chinesium processor, kind of running, sort of, if you squint ... and you are using this personal project to justify inflicting the systemd-cancer on the entire planet?
Perhaps if you used the correct tools for the job in the first place you wouldn't think that constantly having to struggle is normal.
"My first attempt used a bash script and wait [multiple pids] etc, figure out which process died, restart it, etc. It became an unmanageable mess."
It's a known hard problem, but init systems have been around for ages without getting all twisted up. The fact that more than one init system exists shows (a) there's more than one way to solve the problem, and (b) nobody has come up with a perfect solution yet.
"I spent some time googling, and despite still being a newbie with systemd, I now have a perfect system where the dependencies will start and restart themselves in the correct order, as well as the sway@tty1 service running as the display manager, auto login etc"
If only systemd had stuck to solving that one problem...
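For illustration, the hand-rolled approach quoted above boils down to a supervision loop like this minimal Python sketch (the commands are stand-ins, not the actual sway/FreeRDP/wvkbd invocations; a real script also needs start ordering and restart back-off, which is exactly where these things become unmanageable):

```python
# Minimal sketch of a "watch several children, restart whichever dies" loop.
# The child commands are placeholders that just sleep for a few seconds.
import subprocess
import sys
import time

COMMANDS = {
    "compositor": [sys.executable, "-c", "import time; time.sleep(2)"],
    "keyboard":   [sys.executable, "-c", "import time; time.sleep(3)"],
}

# Launch every child once.
procs = {name: subprocess.Popen(cmd) for name, cmd in COMMANDS.items()}

for _ in range(3):                      # bounded so the sketch terminates
    time.sleep(1)
    for name, proc in procs.items():
        if proc.poll() is not None:     # this child has exited
            procs[name] = subprocess.Popen(COMMANDS[name])  # restart it

for proc in procs.values():             # clean up the sketch's children
    proc.terminate()
```

Even this toy version ignores dependencies between the services, which is the part init systems exist to handle.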
The systemd-cancer itself is clearly a cancer ... Consider: it takes root in its host, eats massive quantities of resources as it grows, spreads unchecked into areas unrelated to the initial infection, and refuses to die unless physically removed from the system, all the while doing absolutely nothing of benefit to the host.
And now it has metastasized and is attacking the boot process. It's time to cut it out of the Linux system completely, lest we allow it to destroy its own host (which I suspect was the intent all along ... Poettering & Sievers need revenge (it's an ego thing) for getting thrown out of the Kernel group for not playing nice with others.)
"the devs of the distros seem to love the thing"
I don't think so. Consider that the devs at RedHat had no say in the matter, it was forced on them by management ... and the senior devs at Debian quit and formed Devuan rather than use it. Almost all of the other distros that use it use the RedHat and Debian repos, mostly out of ignorance and/or apathy, with a pinch of sheer laziness for spice. They certainly didn't spend any time thinking about the ramifications, beyond "I use that software repository, so I must comply" ...
"why don't the devs listen to the users instead?"
RedHat doesn't care; they are trying to out-Windows Redmond. The devs have no say in the matter.
The Debian devs voted with their feet.
Wow, amazed at the downvotes for Steve.
SystemD breaks the UNIX philosophy and tries to do too many things at once. It works well, though, for personal computers (not servers). And its developer, I fear, has become (or at least will become) a Microsoft stooge, and I fully expect Microsoft to try to poison the open source community as much as it possibly can. Embrace, Extend, Extinguish, etc. :(
But we should remember that it is the enemy who spreads division and hate. To hell with all that.
Debian is still great. So is Devuan, but Devuan is, after all, ~95% Debian. It would be 5% of nothing without Debian, and I have a huge amount of respect for the Debian devs who have maintained the upstream packages for almost literally everything.
I am writing this post on a laptop, and of course that runs Debian (and SystemD). And it runs pretty well. If it were a server, though, it would be running Devuan.
Why do you feel that MS will use its old EEE tricks today?
Microsoft has no real interest in desktop OSs - the mess that Windows 11 is being the obvious indicator of that. Microsoft today is all about cloud-hosted and browser-agnostic. I'm currently using Dynamics 365 in Safari on my MacBook Pro and it's just as good as Edge on a Windows machine.
There are plenty of reasons to dislike systemd; attributing Microsoft's old Gates / Ballmer era behaviors to it is hardly one of them.
> Why do you feel that MS will use its old EEE tricks today?
These days, Microsoft is into Surveillance Capitalism just as much as Google, Amazon, Facebook (sorry "meta") et al.
Windows and Office (not to mention LinkedIn, GitHub and Azure) bring them vast amounts of data on everyone. Windows has ads right in the start menu these days, and even without the ad money I'm sure there is a queue of 'data brokers' who would pay for access to that trove.
Microsoft "love Linux" just so long as it's running in the nice white-box environment that is WSL. But they seem determined (via Secure Boot, UEFI, and other, more insidious ways) to make it as difficult as possible for the general public to run Linux natively on their computers. Why? Because users would evade their surveillance by doing so.
Microsoft love things like Snap, Flatpak, Docker, Electron etc precisely because they break that UNIX/GNU philosophy of avoiding bloat by having small, auditable programs which each do one task and all rely on each other - moving towards the Windows-style philosophy of "here's an installer, it's a 700MB EXE, just run it, don't worry it's from a 'trusted source', see here's a nice certificate of authenticity for you.." (note how Microsoft loves to tell people who to trust..)
Since their purchase of GitHub, they also seem to be treating Open Source devs as their own unpaid employees, hoovering up all their efforts into one enormous AI-driven deniable-plagiarism machine.
With Web Apps and Electron Apps, it's a Black Box for you and a White Box for Microsoft.
You should try to estimate how much value you get from your Dynamics 365, versus how much value It gets out of You.
how much systemd is "hated" yet how prevalent it is in the Linux distro world
Largely explained by the fact that the largest distro made it a de facto standard in both of its releases (CentOS and RHEL). And then taken up by others because they want to use Gnome and, to a large extent, that depends on 'features' provided by the systemd stack.
Hence it infecting Debian and leading to the creation of Devuan (which I use if I can't use FreeBSD).
As to devs listening to end-users - since when have the larger distros *ever* done that? The last I can remember doing so was Mandrake. DedRat certainly doesn't, otherwise they wouldn't have let the pestiferous P loose on the distro.
Among many other reasons, systemD is hated *because* it is prevalent in Linux distributions.
And further, because it has strayed far, far beyond the scope of the thing it replaced.
It is a fallacy to assume distribution devs necessarily love systemD; no doubt some do, especially if they had a hand in systemD development. However, the presence of systemD in many distributions is attributable to Red Hat being the proverbial 800lb gorilla, and driving inclusion of the thing. For better or worse, Red Hat at least heavily influences, if not outright determines, the direction and decisions of many Linux distributions.
Only the systemD devs, and poettering, know why they don't consider the needs and feedback and input of non-systemD devs. Presumably Red Hat (marketing?) has a hand in that too, e.g. politics and control and other shenanigans may be in-play as well. The rest of us can only speculate on the reasons, and it would likely not be positive.
Either that or a few fairly critical bits of software made it a dependency which meant building a distro without it became very hard work as you suddenly had to maintain forks of those things.
Plus, it mostly works ok so from a practical point of view that's a lot of work for not much gain.
It is hated by some, but it is not universally hated.
I personally, don't care. I don't hate it, but I don't write home about it either. It is what it is.
It's never got in the way, it's made creating services (for me at least) a lot simpler and the only component that has ever pissed me off is the built in resolver, which can be switched off if you want.
What systemd seems to have done that other init systems could not is standardise the init system across distros. Systemd on Debian looks and works the same as in Arch etc. That didn't use to be the case.
I think a lot of the hate stems from systemd making what used to be arcane and tricky into something relatively easy.
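As an example of that simplicity, a bare-bones service definition is only a few lines; everything here (names, paths, the config flag) is hypothetical:

```ini
# /etc/systemd/system/myapp.service -- hypothetical example service
[Unit]
Description=Example application
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

And `systemctl enable --now myapp.service` then does the same thing on Debian, Arch, Fedora, and the rest.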
I don't think Liam's mother-tongue is American English. I wish the house style were changed to allow article authors to use the English they are most comfortable in, so American authors would use an American idiom, Australians Australian, Indians Indian, Irish Irish, and so on.
My first thought was questioning what 'The Fall' had to do with systemd. Their cover version of systemd would be interesting.
Finally, an honest American. American English only exists because Americans can't spell.
Also, there is no such thing as American English.
We should handle English localisation the way Chinese localisation is handled.
English (British)
English (Simplified)
This would cover a hell of a lot of cases with ease. For example, in my (admittedly subjective) experience, in South Africa most of the people I've met set their computer language to English (United Kingdom). I'm not sure why, I've never asked...but it's extremely common, UK layout keyboards are also very common. I've spent 2 years spread out over the last 10 years in South Africa and the cross section I've interacted with is varied. I would estimate that more than 50% do this, but probably less than 75%. So a significant majority.
India is very similar. My experience here is a lot more limited, but based on the immense web traffic logs and analytics I sort through on a regular basis, I see at least 60% of the Indian traffic that hits my servers as having their language set to English (UK). Bizarre, but not uncommon.
It'd be interesting to see what the most commonly selected locales are worldwide (correcting for English(US) because it is the default in most cases and would naturally be higher). I reckon the results might surprise us all.
I think it's a bit pithy to say we're confused by "fall" — we've been exposed to US writing for long enough that it really shouldn't result in more than a quick double-take. It's essentially archaic in UK English now, but it's still a word that anybody with an ounce of literary sense would be able to grok correctly in context.
Obviously if you're talking about standardisation your points are perfectly valid.
"we've been exposed to US writing for long enough"
I often wonder whether it's about cost savings or assumed intelligence levels of the audience or simple familiarity, but it does seem that English speaking nations are expected to understand "American" English, but more rarely the other way around. Exports to the US seem to be more frequently translated into "American" English or, in the case of some TV shows, completely remade for a US audience. Even show or book titles might be changed to suit the perceived US demographic.
I note that even Americans posting here are often dismayed by the apparent dumbing down of the education system.
Then it gets more convoluted by the time you factor in American English dialects. Some regions of the US use autumn instead of fall. There may be one dialect seen more often both outside and inside the States because of commercial style guides, but there is a lot of variation regionally in usage and spelling.
My only issue is with people who act incensed at another culture using a different dialect of English. I can never tell if they are being contemptible or daft. If you have a middling grasp of the English language and reading comprehension, you can almost always determine the word usage through context. It is English. If there ever was a mongrel of languages, it is English.
I've noticed that the imposed style guide seems to go further than simply using US spellings, it seems to be the case that if there's a choice of words and spellings, there seems to be a strong preference for choices that are only valid in US English. That and a few national stereotypes thrown in for good measure makes it look like someone's trying a bit too hard to make a point.
There are publishers who do this. Of course, in Norway there are even two written languages, so I am used to reading articles in both bokmaal and nynorsk (but I was referring more to Lost Art Press, who publish books in the native idiom (American, British, ..) of the author).
I have walked from Skien to Kirkenes, and I have been to Bergen, Ålesund, Trondheim, Bodø, Tromsø, Hammerfest, and many more towns in Norway, and every Norwegian man and woman I met spoke bokmål. I believe I have never heard Nynorsk in the wild.
(Sorry for the horribly mangled language. I am 20 years out of practice.)
Oh good. :-)
These days, unfortunately, all my language-learning efforts go into Czech, compared to which all the half-dozen other languages I've ever studied seem easy.
For clarity, I don't mean Czech is harder than any other language I studied. I mean it's harder than *all* the others *put together*.
As an example: I strongly suspect that Czech has more plurals than English, French, Spanish, German, Swedish, and Norwegian *in total*.
There are 4 genders, and at least 2 plurals: one plural for 2-4 and a different one for 5+ (in the genitive declension). Most regular nouns end in one of 4 letters typical for each gender. Combine that with 7 cases, and that makes some 21-28 regular low plurals, plus another 16 regular genitive plurals... but there are lots of exceptions too. Adjectives must agree for gender _and_ for number. In verbs, there are 2 classes of tense, imperfective and perfective, and past and conditional tenses take a gender as well.
It is almost unbelievably complicated.
I would like to live in Trondheim.
but try Turkish with its vowel harmony!
Or Scots Gaelic (or Irish or Manx) with the thick and thin vowels and words changing depending on how they are used:
Mor == 'Big'
But if there's a comparator then it becomes 'Gle mhor' (very big) and, of course, adding the h changes the sound (mh == 'v' almost always)
Bring back ancient Hebrew, where there were only two tenses - incomplete and complete.
:-O Frankly, that sounds like a grammatical horror show.
Everyone in the English-speaking world ought to be grateful that the East Midlands dialect of English prevailed and effectively became the national language many centuries ago, because it was basically a very simplified mongrel mixture of Anglo-Saxon and Norse, so all the complicated stuff thankfully went out the window, e.g. the three-gender noun system and the rest.
That is why today Beowulf in the original Anglo-Saxon is completely incomprehensible to modern English readers because the language changed so much back then.
Nynorsk and Bokmål are written languages. There is no official standard for the spoken word. People are usually speaking in their local or regional dialect, even in broadcasting or in the parliament.
And it was easy to read what you wrote! Even if I disagree with your claim :-)
I also fully understood what you wrote as well.
The interesting research work of Lundquist, Rodina, Sekerina, Westergaard, Klassen and others indicate that dialects are still evolving today and that the spoken feminine gender in dialects in the north and in urban areas is on the way out, particularly among younger people.
In that respect, those changing spoken Norwegian dialects are becoming more like the existing situation in Sweden and Denmark where there are just two spoken noun classes, common (merged masculine and feminine) and neutral.
"Fall" might be thought of as American English but it's actually a regional word. On the west coast you're more likely to find the word "Autumn" used (it's also closer to the Spanish word "Otono").
Most of the time if you use a certain amount of imagination you can figure these things out.
Native Spanish speaker here. "Otoño" has the same root as "autumn" (namely, Latin "autumnus"), and I learned that translation from my English teachers at school. I learned of "fall" after moving to the US mid-Atlantic, and also that people here don't seem fazed when hearing one or the other -- it's all the same to them.
> I don't think Liam's mother-tongue is American English.
You're right, it's not.
In my last two full-time tech-writer roles, I was required to write in US English too. But documentation is intentionally flat and affectless, so there it really is just a matter of spelling.
Journalistic pieces, not so. :-/
"Autumn" is from the Latin autumnus, via French automne, and is not very English at all. First commonly used in its current form in English in the 1600s.
"Fall" is from the Old English "feoll" (pre Norman conquest), and is very English indeed, although the time of year was more often referred to as "harvest", or hærfest, until the mid-1500s when they started calling it "fall of leaf".
So, as usual when the Brits bitch about a particular way the Yanks use English, their version is actually French .... but the Yanks are still using English. Go figure.
The typical hobbyist who wants to compile their own kernel has no reason to leave secure boot enabled, unless they are dual booting Windows 11 I guess.
Apple ARM Macs allow a way in firmware to set secure boot per OS, so you could leave secure boot enabled for macOS but disable it for Linux.
In one part of the linked article, it reads, "The identification itself does not matter much, but some of the later values are important: for example, we do want to make sure “1.3.6.1.4.1.2312.16.1.2” is included in extendedKeyUsage, and it is that OID that will tell shim this is meant to be a module signing certificate."
Further down, the linked article reads, "Now, let’s enroll that key we just created in shim. That makes it so it will be accepted as a valid signing key for any module the kernel wants to load, as well as a valid key should you want to build your own bootloader or kernels (provided that you don’t include that ‘1.3.6.1.4.1.2312.16.1.2’ OID discussed earlier)."
So, should we, or should we not, include that ‘1.3.6.1.4.1.2312.16.1.2’ OID? What is the difference, if any, between a "module signing certificate" and a "valid signing key"?
When the concepts are muddy, the code implementing them is likely to be "wrong" in one or more ways.
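Reading the two quoted passages together, the distinction appears to be this: a certificate carrying that OID is treated by shim as a module-signing key *only*, so you include it if the key is just for signing modules, and omit it if you also want shim to accept kernels or bootloaders signed with the same key. A hedged sketch of the relevant openssl extension config (file and section names here are illustrative):

```ini
# mok.cnf (illustrative) -- X.509v3 extensions for a MOK certificate.
# The OID 1.3.6.1.4.1.2312.16.1.2 restricts the certificate to
# kernel-module signing; shim will then refuse kernels or bootloaders
# signed with it. Omit the OID for a general-purpose signing key.
[ v3_module_signing ]
basicConstraints = critical, CA:FALSE
keyUsage         = digitalSignature
extendedKeyUsage = codeSigning, 1.3.6.1.4.1.2312.16.1.2
```

(That reading is mine, not the linked article's; the article's own wording is, as noted, muddy.)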
[Author here]
> How does all that cryptographic stuff relate to the ability to compile your own?
See my earlier story for what relatively little is known so far.
I think it will still be possible, but you will have to disable all the firmware security measures -- and possibly roll your own instead.
It will make enterprise distros even more locked down than they already are, and that will probably apply to the mass-market desktop distros too: ChromeOS, openKylin, UOS/Deepin, etc.
Remember that the world's biggest Linux market is in China, not in the West. I suspect that ChromeOS has 10× more users of desktop Linux than all the other Western distros put together, and China probably has twice as many desktop users as ChromeOS.
Or, you could spectate on Lennart, Luca, Matthew and others discussing the details, if you like:
https://lwn.net/Articles/912370/#Comments
generally speaking, I don't think any general purpose Linux distribution is likely to require any of this stuff to be used. It is all optional support. If you don't like Secure Boot and TPM and all this other bootchain security stuff, you can turn it off, and Fedora or Ubuntu or Debian or Arch or Mint will still boot. We have no reason to stop doing that.
All the TPM/disk encryption interaction stuff is only relevant if you actually want to encrypt your disk. If you don't do that, none of it matters.
There really are people out there who want a secure boot chain. They want to know that if they leave their system unattended, it can't be trivially compromised or exfiltrated. If you don't want that, that's fine. But because it's a thing people want, people are going to work on it.
Now that Lennart works for Microsoft, I see this as a backhanded attempt to get every system dependent on TPM-2 hardware.
As we all know, W11 is seeing very little take-up. Making the Linux kernel require the same hardware as Windows is, IMHO, just wrong.
Perhaps (no evidence) this is just a prelude to Windows moving to a Linux kernel.
Considering that TPM does absolutely nothing for the end user, it is not surprising that uptake of Win11, with its apparently hard requirement for TPM, has been poor.
TPM is an attempt to benefit the corps that funnel media to the end user, the corps that funnel subscriptions to the end user, and possibly corps wanting to manage their employees.
Nothing actually genuinely useful to the end user that isn't already done better locally.
I for one will continue to use Linux versions unpolluted by Poettering et al. Shoddy coder, shoddy code, unwanted cruft.
As for it being "optional"? Just you wait to see how optional it'll be when spinning anything up on any MS cloud instance.
No thank you, not needed, not wanted, and TPM will continue to be disabled in my PCs' bios.
"Why should dual-boot break if the keys for both kernels are enrolled in the TPM / secure boot keystore / wherever?"
I dunno. And I don't wish to know. Might the second install blast the keys used by the first one? I once tried dual-booting a couple of distros, sharing the same second HD for /home. Each blasted the device ID used by the other and I had to resort to gparted every time I changed distro. Stranger things have happened. Do you trust Poettering to never break the chain?
[Author here]
> Why should dual-boot break if the keys for both kernels are enrolled in the TPM / secure boot keystore / wherever?
Um. That would appear to assume that you are dual-booting two Linux distros, and indeed, that both are from the same vendor, as the new functionality intentionally blocks other distros from decrypting a volume encrypted by one particular vendor.
I submit that dual-booting two different OSes is more likely, for instance Windows and Linux. In which case, both would have to understand and use the *same* encryption system if one wanted to use full-disk encryption (FDE).
Since they do not -- Windows uses Bitlocker and Linux uses LUKS -- then you can't do FDE with multi-OS dual boot, as far as I know.
What you could do is have encrypted *volumes* but not the whole disk. That seems perfectly doable on the face of it, but it's not much use, because companies that mandate disk encryption typically require FDE, in my experience. Partial or per-volume encryption is not acceptable because it does not comply with the rules.
As such: no, you're wrong, dual boot would indeed break FDE, unless it's 2 copies of the same distro, which seems fairly pointless to me.
Well, I'm going through the 'fun' of trying to dual-boot two Linux distros with FDE* - an old one, which is stuck on GRUB 2.04**, and a new one on GRUB 2.06. There's no particularly good reason I'm doing this, other than curiosity and bloody-mindedness - a bit like wanting to dual-boot Windows 7 and Windows 11. It's basically an upgrade, where I want to keep access to the old system, and it would be convenient to have them both on the same SSD.

I've got the new distro installed on a spare partition, divvied up with LVM, got another partition with LUKS and LVM (it's a big SSD), copied stuff across, fiddled with fstab, crypttab and /etc/default/grub, chrooted, built a new initramfs, and updated GRUB - and the thing still fails, as the encrypted disk UUID is not being passed to the init. I can manually edit grub.cfg, but that won't persist through new GRUB updates, so I'm trying to work out where things are going wrong. SecureBoot is disabled.

Other people play golf as a hobby.
You ought to be able to choose any one of several authorised and authenticated paths to boot the O/S of your choice on a piece of hardware, which should allow dual booting. If the SecureBoot architecture doesn't allow for this, it's flawed. Obviously there's merit in enabling a lock-down so only one O/S can be booted, but being restricted to one really should be an option, not an unchangeable default.
*'Full Disk Encryption', where the ESP is (obviously) not encrypted, but the 'rest of the device' is a LUKS encrypted device. Once LUKS is opened, the device has LVM in the next layer, which then has volumes for swap, boot, root etc. ('rest of the device' is in scare quotes because actually, it's not. About half is some other partitions which are useful to have lying around on the same SSD. But the principle holds.)
**Stuck, yes, because GRUB 2.06 doesn't build on the old distro. Lots of 'make' errors, which I am invited to submit as a bug.
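For what it's worth, the "UUID not being passed to the init" symptom usually means the generated kernel command line is missing the unlock parameter, and the persistent place to set that is /etc/default/grub. A hedged sketch - the parameter name depends on the initramfs generator (dracut wants rd.luks.uuid=, mkinitcpio wants cryptdevice=), and the UUID below is a placeholder:

```sh
# /etc/default/grub -- illustrative fragment; the UUID is a placeholder.
# Let GRUB itself read /boot from inside the LUKS container:
GRUB_ENABLE_CRYPTODISK=y
# Tell the initramfs which device to unlock (dracut-style syntax;
# mkinitcpio-based distros use cryptdevice=UUID=...:name instead):
GRUB_CMDLINE_LINUX="rd.luks.uuid=luks-00000000-0000-0000-0000-000000000000"
```

Re-running update-grub (or grub-mkconfig -o /boot/grub/grub.cfg) afterwards is what makes the setting survive future GRUB updates, unlike hand-editing grub.cfg.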
companies that mandate disk encryption typically require FDE, in my experience
FDE on MacOS depends on which version you are running (and what hardware - my M1 Mac has multiple volumes, most of which are pretty small). And the recovery partition on all versions and hardware isn't encrypted at all.
The most pressing reason to dual-boot is that the user wants to use the full bare-metal capabilities of the hardware. At a previous job, I dual-booted my work laptop because I wanted a OS running on my system which was for personal use and unconstrained by corporate IT, and said constraints were restrictive to virtual machine usage. Some people might be driver developers who need access to the hardware for testing their code. Etc. Those are just some use cases which spring immediately to mind; I'm sure other commentards can suggest others.
Because VMs don't always work when you are doing bare metal stuff, like installing firmware in a device.
For example, my 3D printer and my Brother label printer both reset, drop off the USB, and reconnect when you say "I'm sending you firmware".
This gives all the VM systems I've tried (QEMU & VirtualBox) the shits, so I've had to use my actual Windows laptop.
Or apparently if you want to run Adobe Creative Suite. Officially, in writing from their support monkeys, they are claiming that VM installations are now unsupported for all Adobe CS products but the standalone version of Acrobat. They never say it out loud, but I'm pretty sure it's their toxic-waste DRM trying to keep people from snapshotting the guest OS and installing the trial over and over.
(Not that I want to run that crapware these days, but sometimes you get paid to.)
Finally the promise of the future is here. I'm so happy this is at long last available. Everything will be nicely locked to two or three hardware vendors, a nice OS duo- or triopoly, all the AMZN content at our fingertips, bots are going to be filtered out, no more captchas - remote attestation from approved edge providers through approved carriers and approved last-mile providers, through an approved and signed OS and the one true good browser, right to the CPU core, where the digital signature will live right next to the reliable and secure out-of-band management system, and then back again, everything signed and secure and reliable (apart from the signed and approved backdoors for secure and reliable and accountable government agencies).
I'm sure corporate Windows users are going to love this! Wait...
It isn't the secure-boot system itself, but how it's implemented.
You can have a dystopian system like that, but you can also have a system where you trust nothing but your own personal signing key* and have *that* sign things downstream.
____
* your key, your certification authority, OpenSSL, your standard C library, the OS, the computer generating the key, the hardware manufacturer, and the country that said hardware manufacturer comes from. Also see the famous Reflections on Trusting Trust essay/talk.
Of course, I can even get the whole world under lock and key, by putting myself in the cage.
What I was pointing at is the Wintel creep and where it's headed with systemd.
My personal belief is there will always be token plurality available, just like with Firefox, which reportedly lives mostly off Google's money. It appears to be cheaper than monopoly lawsuits of the Microsoft Internet Explorer fashion.
I spend my days, when not coding, having to administer a fleet of Windows laptops.
There are a number of Dell laptops that tend to suffer TPM failures, rendering the OS useless, especially as we march towards Windows 11, which requires at least TPM v1.2.
If the TPM fails, it's fucked. The laptop is essentially scrap if you need to run BitLocker or any sort of encryption on the disk.
Now we're trundling into a Poettering world where we're having this shit enforced on us? Only those who have not had to deal with the delight of a dead TPM on a laptop would ever think this implementation is a good idea.
-> Now we're trundling into a Poettering world where we're having this shit enforced on us?
You can always fork it. Nobody is forcing you to use a vendor-supplied kernel.
As I have mentioned several times previously, the top contributors to the Linux kernel are corporations. They pay people and those people put in what they want to put in or what they are told to put in.
Linux today is not the Linux of 20 years ago. Sure, anyone can make a new distro with this feature and that feature. They generally don't last long.
Sure, anyone can make a new distro with this feature and that feature. They generally don't last long.
EL Reg normally gives us two or three starry-eyed reviews of Ubuntu remixes each week. It would be interesting to know how many of them ever make it to a second version. It might happen sometimes, I suppose.
Just that I'd put my money on the lettuce.
But one or two will rise to their wobbly baby deer legs and surprise by beating the odds and going the distance.
That said, it would probably be better if the dev community weren't spread so thin, and if there were better community support and mechanisms to flag up good ideas from the forks and get them back into the mainline more quickly.
That is the Linux way. We are told time and time again that if we don't like something we can take the source code, fork it, and build our own version. Extending that for a moment to distros, that is why there are so many distros. Some distros last far longer than others, others are built to specifically not have a particular subsystem (eg Devuan).
What we seem to have, what I see a lot of, is quite a few Linux people thinking they can influence the way Linux should be - it should not have systemd or it should not have a secure boot feature, for example. All they have to do is pick a distro which meets their criteria and use it, or if it does not exist they can build it. Open source does not mean you have control over what somebody else does, yet that seems to be how some people think it is.
So, you wrote that forking is "pointless and naive". In some cases, with the plethora of distros adding one more Ubuntu remix with marginally different bells and whistles, I think it is pointless. But in general? You seem to be turning the open source world on its head.
Edit: For the record I have expressed several times my dislike for systemd in general and its tentacles. This is just another tentacle.
There are. You have the option to print them off and keep them in a secure safe somewhere, or you can use a 3rd party application such as Sophos.
Except, in my experience, there have been times the latter doesn't work. I've had three laptops require access to the BitLocker key, and in those three instances the key Sophos listed didn't unlock the laptop.
Experience of the former is that, while it's more rudimentary, the key can sometimes not work either.
So what we found was that when a laptop got into this state and wasn't accepting the BitLocker key, we took a random selection of machines and tried to access them with their keys. Those machines we could access.
It then happened again to another laptop, one that didn't get tested, so again we tested more machines and never encountered the problem.
Since then, it's not happened again. I've no real explanation as to why. It's not so much that it's annoying; the concern is that a hard drive could be unrecoverable because of an issue with the key.
"If the TPM fails, it's fucked."
Agreed. As a field engineer, I've come across a number of laptops where it's the TPM that has failed. It means a new system board, every time. Luckily, it's not that common, and the clients I deal with just re-image anyway when the system board is replaced. I'm guessing someone decided the few pennies saved by the TPM not being a plug-in module outweigh the cost of a few warranty replacements when the now-integrated TPM fails.
I somehow lost a tiny bit of trust in Debian for not being on this list.
Dealing with systemd fuckups, especially its inability to automount cameras and USB keys when I connected them, was the reason I finally left Debian for Devuan.
systemd is the reason Devuan exists.
One of the best minimalist distros out there, rock solid for decades, but the community is pretty small these days, and if life happens to anyone it may be spread too thin to recover. A few extra bodies and a baseline trickle of cash and this would be a great base for servers again.
I mean I still use it to run my own stuff, but I stopped using it for stuff at work because of the medium term risks of the project losing support. I can't justify the risk having to port a bunch of boxes to another distro if Patrick has to step away, but I still trust Slackware to do the job about as well as a BSD.
Hopefully he is getting more support these days, and hopefully the squatter that was collecting donations in his name has been shut down(double check those donation links).
The fact that people are pissed about real problems that systemd and its team have caused, and continue to cause, is a different story.
So yeah, I'd be happy to leave behind the bickering, but it won't stop unless the underlying problems are addressed. Ignoring them would be like noticing that half the sandwich you are eating is covered with mold, but eating it anyway without even trying to scrape the mold off.
Even without a predesignated backdoor, hardware can never be "theoretically" secure in the same way that encryption is theoretically secure.
Remember that Israeli company that unlocked the phone of the LA workplace terrorist? Once figured out, the technique could be applied to any phone of the same model. Perhaps they had access to someone involved with the original design, who knew the weak points, or perhaps not.
In the case of a stolen computer, boot-time entry of the key is the only theoretically secure method.
However, even with boot-time entry of the key, what if somebody surreptitiously swaps out the BIOS for an evil replacement? There is no such thing as total security.
Is now shipping in the new systemd. I wonder if the parts that needed work have been finished in the last week since this article.
Oh wait, nope nothing has been fixed, and now that Lenny has started coding, he won't let anyone change anything that would cause him to rewrite his first draft. Classic LP.
"Not many distros use this, though. So far, The Reg FOSS desk has only seen it in Pop!_OS."
We put this in Fedora a while back, I think. Most of the bits, anyway. Around when Baytrail devices were briefly a thing (several of those had 32-bit UEFI firmwares). I don't know the last time anyone tested it...
[Author here]
> You have always been able to boot a 64-bit Linux kernel from the 32-bit version of Grub
I think you misunderstand two different things here.
We are not talking about GRUB.
[1] This is talking about systemd-boot which is an alternative to using GRUB.
[2] The scenario being discussed is about loading a 64-bit *kernel* from 32-bit *UEFI*.
This is complex and tricky; as an example, see Ubuntu's docs:
https://help.ubuntu.com/community/UEFIBooting
>> The scenario being discussed is about loading a 64-bit *kernel* from 32-bit *UEFI*.
>> This is complex and tricky; as an example, see Ubuntu's docs:
Err... sorry, booting a 64-bit Linux kernel from a 32-bit **bootloader** on 32-bit UEFI has always been easy. Systemd seems to make everything difficult because (IMHO) it is shite. A reasonable idea (questionable??) that is poorly implemented. The bug list for this guff is "not pretty".
What does systemd deliver for most folk that's good?? I've yet to see anything but significant problems.. I refuse to run this garbage on anything headless or remote as traveling several hours to fix stupid errors **isn't happening**.
Just my 5p worth.
B.
Linux has traditionally stayed mostly true to the UNIX ethos of having lots of small, preferably interchangeable, tools that do specific things extremely well, as opposed to one giant blob of code that everything has to rely on. Systemd goes 180° against that ethos. In many ways it's just as bad as a BLOB - a giant tangle of garbage code that you are more or less forced to put up with if you want to deal with a commercially supportable distribution (sorry, people, this is important for a lot of businesses even if it's just a useless security blanky for upper management).
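For anyone who hasn't lived it, the "small interchangeable tools" ethos being invoked here is the classic pipeline model: each program does one narrow job and plain text is the interface between them. A trivial, self-contained illustration (the sample data is inlined so nothing on the host is assumed):

```shell
# Count login shells by frequency: four single-purpose tools composed via pipes.
printf 'alice:/bin/bash\nbob:/bin/sh\ncarol:/bin/bash\n' \
  | cut -d: -f2 \
  | sort \
  | uniq -c \
  | sort -rn
```

Any stage can be swapped out independently, which is exactly the property people feel a single integrated suite takes away.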
Debian was the first distro to support this, to the best of my knowledge. I wrote a trivial kernel patch and some grub changes to make it work better!
We still support this feature today, although most of the machines that ever needed it will have been killed off by now.
The missing pieces for truly secure boot on Linux are finally starting to appear one by one.
Now distributions need to start bundling a basic initramfs with the kernel image and sign the resulting file, then offer loadable initramfs extensions for situations where the basic initramfs is not sufficient.
With the added TPM functionality we should be able to implement passwordless (from users' point of view) disk encryption like every modern OS offers.
Finally, no more waiting minutes for GRUB to unlock a disk in multi-user environments!
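The bundling step described above is, mechanically, just adding the kernel and initramfs as named sections to the systemd-boot EFI stub so the whole thing can be signed as one file (systemd's ukify wraps this). A rough sketch of the mechanism using objcopy; the section names match the real UKI layout, but the stand-in input binary, file contents, and addresses are purely illustrative, since a real build uses `linuxx64.efi.stub` and an actual kernel and initrd:

```shell
# Illustrative: bundle a "kernel" and "initrd" into one signable binary.
# /bin/true stands in for the EFI stub so the mechanism can be shown
# without an EFI toolchain; the section names mirror real UKIs.
echo "fake kernel" > vmlinuz
echo "fake initrd" > initrd.img

objcopy \
  --add-section .linux=vmlinuz     --change-section-vma .linux=0x2000000 \
  --add-section .initrd=initrd.img --change-section-vma .initrd=0x3000000 \
  /bin/true uki-demo

# One file now carries both payloads as sections, ready to be signed as a unit.
objdump -h uki-demo | grep -E '\.linux|\.initrd'
```

Because the result is a single file, a single signature covers kernel, initramfs, and command line together, which is the whole point of the exercise.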
"Secure" does not mean the owner of the laptop can't change it. That is "control" (by a third party).
Don't mix the concepts.
Not having access is loss of "control". If anything it's insecure.
If you throw your laptop into the ocean it's not "more secure": you have lost access, and lost control.
Not being able to choose your kernel will never be "more secure"; that's marketing nonsense. Someone else is choosing it, and you are being stitched up.
Perhaps slightly beside the topic, but I have suspicions (although correlation is not necessarily causation)...
I have used Skypeforlinux on Devuan (with Trinity desktop) for some years to talk with family on the other side of the world.
In the last few weeks Skype started crashing. It's as if it is missing something. Both on laptop and workstation.
I made another partition on the laptop and installed a Debian systemd distro (Q4OS with Trinity Desktop).
Guess what? Skype works on that.
UKI is short for "Unified Kernel Image" and combines the Linux kernel and initrd into a single file, along with some other smaller components, allowing the whole thing to be cryptographically signed.
Great. Now how about storing the resultant UKI in the EFI System Partition, eliminating the need for a boot loader?