"for some of them, enabling UEFI boot is either still in beta or remains an extra-cost option"
Okay, having an option that is in beta is perfectly normal, no issues there (although you might want to push it into full release one day).
But to have something that works and is apparently sorely needed (those UEFI motherboards are already out there) and not bake it into your release?
This is Linux, not Windows or Apple. Get it into your codebase and tout it loud and clear. Isn't that supposed to be a technical advantage over your rivals?
The hardware emulated by virtualization platforms is actually pretty old (check out the chipsets that VMware, VirtualBox and Proxmox present to guest OSes)...
There's no real advantage to emulating the new stuff, precisely because it's all virtual anyway, so none of the physical limits that applied to a real G43 chipset (from the Core 2 Duo era) actually apply....
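You can see this for yourself: QEMU will list every board it knows how to emulate. A minimal sketch, assuming qemu-system-x86_64 is installed (exact output varies by QEMU version):

    # Ask QEMU which machine types it emulates. The list is dominated by
    # 'pc' (i440FX + PIIX, 1996-era) and 'q35' (Q35 + ICH9, 2009-era).
    import subprocess

    out = subprocess.run(
        ["qemu-system-x86_64", "-machine", "help"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Print just the chipset lines to show how old the emulated boards are.
    for line in out.splitlines():
        if "i440FX" in line or "Q35" in line:
            print(line)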
[Author here]
> UEFI predates "Cloud". I am actually shocked that BIOS is still in play in modern data centers.
No it isn't, and no it doesn't.
UEFI is the x86-64 version of EFI. EFI was an Itanium standard: it was the firmware for Itanium computers.
The EFI project began in 1998. One year later the first version of VMware came out. EFI (*not* UEFI) and VMware are contemporaneous.
The first server version of VMware, GSX, appeared in 2001, the same year as the first Itanium boxes started shipping.
AMD Opteron was launched in 2003:
https://www.theregister.com/2003/04/22/amd_launches_opteron/
Machines with UEFI started shipping in 2003:
https://retrocomputing.stackexchange.com/questions/24711/what-was-the-first-motherboard-with-uefi
So, no, you have this back-asswards. Server x86 virtualisation came first, then came Itanium with EFI, then came x86-64 with traditional BIOSes, then last of all came UEFI.
> The other point I would make is that "One of the drivers for UEFI support" is getting TF away from GRUB.
Again no, but this time it is at least debatable.
I have seen one distro that defaults to booting with something other than GRUB: System76's Pop OS, which uses systemd-boot. IMHO this is not an improvement.
ZFSBootMenu is also not yet mainstream, because ZFS on Linux isn't.
Nice response, that's a very neat synopsis of virtualization.
I picked AWS as the start of cloud, as the first (that I am aware of) commercially available VM service.
At that point, if you're building an x86 DC and are several tech generations removed from needing BIOS, I (still) don't see the reason to build with it when UEFI is available. And even if you are upgrading the HW of an existing DC, is there some impediment to ditching BIOS?
GRUB is more of a personal axe to grind, but my life has been much better since moving to rEFInd. Maybe a commercial env has other considerations, but GRUB is not admin-friendly: too much of the boot config is left to the OS distro.
"I picked AWS as the start of cloud, as the first (that I am aware of) commercially available VM service."
That you are aware of.
People were working with primitive virtual machines in the late 1950s and very early 1960s. In fact, it was a major selling (leasing[0]) point on IBM's System/360 in 1964. The 360 would emulate the 1401, so all your old code could still run, while making the modern 360 code available, too! (WOW! Whodathunkit? What an age we lived in!)
IBM continued the research through the 1960s with the M44/44X, then CP-40 and CP-67, finally culminating in the System/370 variant called VM/370 in 1972. (Note that the "VM" here stood for Virtual Machine, not virtual memory as in BSD's 1979 vmunix ... the hypervisor was the Control Program ("kernel" in modern parlance, kinda, if you squint).)
Absolutely none of this stuff was invented on or for the PEE CEE architecture.
[0] Also available through dial-up (or Switched-56) timeshare ... The current generation didn't invent "cloud", either. See: Service Bureau
UEFI is already fully supported in Linux.
So is BIOS.
What is not fully supported is virtual UEFI machines on many popular hypervisor/virtualization and cloud platforms....
This doesn't mean the hypervisors can't run on UEFI physical hardware (they very much can)...
It just means that if your OS no longer supports BIOS boot, it can't be a guest OS on these platforms (or it requires special settings rather than working right off the bat)...
And Red Hat doesn't want to, for example, kick itself off GCP, AWS or Azure because it decided to go UEFI-only before the 'bigs' were ready for that...
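For anyone poking at this locally: where UEFI guests are supported, it's usually just a case of pointing the VM at an OVMF firmware build instead of the default SeaBIOS. A minimal sketch launching QEMU from Python; the OVMF path is an assumption (it's roughly where Fedora ships it; other distros differ), and installer.iso is a hypothetical image:

    # Boot a QEMU guest with UEFI (OVMF) firmware instead of SeaBIOS.
    import subprocess

    OVMF_CODE = "/usr/share/edk2/ovmf/OVMF_CODE.fd"  # distro-dependent path

    subprocess.run([
        "qemu-system-x86_64",
        "-machine", "q35",
        "-m", "2048",
        # Map the UEFI firmware image in as flash, read-only.
        "-drive", f"if=pflash,format=raw,readonly=on,file={OVMF_CODE}",
        "-cdrom", "installer.iso",  # hypothetical installer image
    ])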
Seriously? What right do they have to bitch when they're so cagey about their source? And they perpetuate trash like systemd?
Personally, I wouldn't accept patches from Red Hat until they stop being a dick over their source code.
Stop and think about this for a while, and you'll realize it's utterly incoherent.
If we (I work for RH) wanted to be "cagey" with our source, we wouldn't give a flying squirrel whether upstream would merge it, would we? We'd just make all our changes downstream. We're only trying to get them merged upstream *specifically because we're trying not to be 'cagey' about them*! If we were being "cagey" about it, we would specifically *not* want upstreams to merge our changes, would we? Them doing it would be *bad* for us.
Upstream first is a requirement inside RH: major changes go as far upstream as possible, first, before they go downstream. Cases like this, where an upstream is so slow about landing them that we have to maintain a sorta 'fake-upstream' fork to base our packages on, are very rare (I can't think of any besides grub). It's not an RH-specific problem either; other distros also have to carry huge patch sets on grub because of it. Here's Debian's (page 1 of 3!):
https://sources.debian.org/patches/grub2/2.06-13/
I'm pretty sure the original comment was referring to the recent RHEL licensing fiasco, not to reducing your maintenance cost by requiring upstream contributions first. Not saying it's bad or malicious, but we all know that nobody likes maintaining a huge set of patches for a package, so let's not pretend that the requirement comes from anything other than a sane approach to packaging.
I understand what the OP was referring to, but it's fundamentally incoherent to suggest that we should be "punished" for the RHEL source change by...upstreams refusing to merge changes we are intentionally sending upstream to make them open for everyone to use. It just doesn't make any sense.
You might think it's a common sense requirement, but holy cow, no. The idea that RHEL needed some secret sauce was deeply held at RH for years. One reason Fedora and RHEL have completely different testing processes is that for years it was policy *not* to upstream any RHEL tests to Fedora because it'd be reducing customers' 'incentive' to pay for RHEL. There absolutely were cases where changes were intentionally kept downstream and *not* upstreamed because of the idea this provided some kinda justification for RHEL sales.
Of course, this wasn't the idea of any *engineers*. As you say, nobody wants to actually have the work of maintaining a giant patch set. (Well, waaaaay back in the early 2000s it was kinda more common and could be a kinda badge of pride for distro maintainers, but that mindset went out ages ago). But it wasn't the engineers setting the strategy, of course. If engineers set the strategy the RHEL source change wouldn't have happened (but also, the company would probably have gone broke decades ago, because engineers are terrible at making money.)
Convincing folks that this was wrong and the best thing for everybody is to upstream changes was not a minor effort (which is why it has a catchy name - "upstream first" - and internally it's a whole darn project with documentation and training courses and the whole nine frickin' yards). We're pretty good at it now, but it was absolutely a process to get everybody to buy in. (Not taking the credit for that myself, it wasn't my project.)
(but also, the company would probably have gone broke decades ago, because engineers are terrible at making money.)
A lot of people forget this. It's a good lesson to learn: a lot of the 'idiots in suits' do actually do the important bit of guiding us lot to make the things that make the money. ;)
I cite the Commodore 128 as a fine example of this. Technically impressive, but never should have been built or released.
"using the existing U-Boot firmware, most often seen on Arm-based systems, to emulate UEFI on BIOS system"
Is that true? Or is that the wrong way around?
The Win10 bootloader emulates UEFI on BIOS; it's not most often seen on Arm-based systems, and if that's a remarkable workaround, it's quite old now.
The irritating thing about the Win10 bootloader is that it lacks a BIOS emulation layer. Your computer boots, loads the UEFI emulation layer, allows you to select a boot OS -- and then, for another OS, does a hard reboot so as *not* to load the UEFI emulation layer.
Using UEFI to load a BIOS emulation would make sense on an Arm-based system without native BIOS support. Is that what U-Boot firmware does?
I had a look at the referenced Intel paper: they wanted people to stop booting into DOS to do hardware tests and disk partitioning. They thought that once the supply chain stopped doing that, they could get rid of the BIOS. I didn't notice any reference to GRUB.
EFI emulators have been around for ages.
They're one of the ingredients needed to get a working Hackintosh, given that Apple was first aboard the UEFI train and PCs only followed almost a decade later.
Pretty sure those can also be used to bring up a UEFI Linux box.
[Author here]
> Or is that the wrong way around?
Um. I am at a bit of a loss how to respond to that comment, because it seems to be entirely founded on misinterpretations, getting stuff backwards, and unsubstantiated claims.
One example: I wrote about UEFI emulation on BIOS systems. You then ask about BIOS emulation on UEFI systems. I didn't say anything about that so I can't answer!
If you can rewrite it, with some links, and double check that you've read the piece correctly, then I can try to answer.
[Author here]
> Ok, I asked a question that was not covered by your article.
No... the article was about how water boils to form steam, and you asked about the freezing temperature of water ice versus dry ice. It's not merely something I didn't mention: it is a whole different subject that is unrelated.
> I should ask only questions that are already answered?
You are of course perfectly free to talk about anything you want wherever you want, but there are probably better places for some of it?
> The question is, have I got stuff backward, am I misinterpreting?
That's 2 questions. :-)
But I'll try to answer anyway.
> have I got stuff backward,
Yes, you have.
> am I misinterpreting?
Yes, you are.
> Or has the original author got it completely wrong?
Who do you mean? I am the original author.
The Fedora proposal around May/June was this:
[1] Take a tool called Das U-Boot, mainly used in the RISC world, mostly on Arm
[2] Use it on x86 Linux, where it is not normally used or needed
[3] Run it before the OS when booting on BIOS computers
[4] Use U-Boot to _emulate_ UEFI on these BIOS machines
[5] So that Fedora can drop BIOS support.
Emulating UEFI on computers that are booting in legacy BIOS mode.
You asked about emulating BIOS on computers with UEFI firmware, which is not the subject here, not a proposal for Linux, and is not needed.
It's needed for other OSes, such as DOS and OS/2, but not for Linux. So it is a thing, but it's totally irrelevant here. That is why I am confused: I wrote about going from B to A, but you asked about going from A to B.
Is that clearer?
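For anyone who wants to check which way their own box came up, the kernel exposes the difference; a minimal sketch, relying on the standard sysfs path:

    # A kernel booted via UEFI (real, or emulated the way U-Boot would do
    # it) exposes /sys/firmware/efi; a legacy BIOS/CSM boot does not.
    import os

    if os.path.isdir("/sys/firmware/efi"):
        print("Booted in UEFI mode")
    else:
        print("Booted in legacy BIOS mode")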
'Legacy' stuff always hangs around way longer than people expect it to. I perhaps wouldn't go quite so far as to confidently bet that the requirement for BIOS-mode booting for virtualization will still be around in (to pick a not entirely arbitrary date) 2038, but, on the other hand, I wouldn't be altogether surprised if it still hadn't quite entirely gone away by then…
"I have a feeling that in 2038, a non-trivial number of *nix systems will suddenly think it’s 1970"
Nah. It'll be fewer than the number of systems that had Y2K issues, because we've had 38 more years to bring everything up to date.
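For the record, the cliff edge is simple arithmetic on a signed 32-bit time_t; a quick sketch (note the wrap actually lands in 1901, though a counter reset to zero would indeed read as 1970):

    # The 2038 problem in miniature: a signed 32-bit time_t maxes out at
    # 2**31 - 1 seconds after the Unix epoch, then wraps negative.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
    print(datetime.fromtimestamp(-(2**31), tz=timezone.utc))   # 1901-12-13 20:45:52+00:00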
My punched card (and tape) support is handled by an IBM 1401 or a DEC PDP-11.
UEFI is worthless. It provides nothing of value.
...apart from the ability to write much richer pre-boot environments, and standardisation for things like large hard discs and pre-boot network support.
It's also cross platform and open.
Secure Boot is STILL a Windows-only joke of a technology.
It's not; it's usable under Linux too.
And whilst it's not something you or I may want on our desktop systems, as embedded technology improves and is increasingly networked, I'm not averse to a software stack that's signed from the firmware up.
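It's even queryable from userspace, if you want proof it's a real thing on Linux; a minimal sketch reading the SecureBoot variable through efivarfs (the GUID is the standard EFI global-variable one; mokutil --sb-state does the same job):

    # Read the SecureBoot EFI variable. In efivarfs the first four bytes
    # are the variable's attributes; the fifth byte is the actual value.
    EFI_GLOBAL_GUID = "8be4df61-93ca-11d2-aa0d-00e098032b8c"
    path = f"/sys/firmware/efi/efivars/SecureBoot-{EFI_GLOBAL_GUID}"

    try:
        with open(path, "rb") as f:
            data = f.read()
        print("Secure Boot is", "enabled" if data[4] == 1 else "disabled")
    except FileNotFoundError:
        print("No SecureBoot variable: BIOS boot, or firmware without it")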
BIOS still works fine.
For what level of fine, though? If you're calling BIOS interfaces you're dropping into real mode (which may not be around that much longer); otherwise you're back to hoping that the extended interfaces for finding things like the ACPI tables have been correctly implemented, that the tables they point to aren't junk... (and also that whatever bootloader you use has found them and given you a pointer...)
Okay, I suspect people are writing just as poor UEFI implementations too, but again, at least that's an open, extensible specification that isn't tied to x86.
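If you want to see what the firmware actually handed your kernel, it re-exports the ACPI tables it found; a minimal sketch (standard sysfs path; reading the table contents generally needs root, and acpidump does the job properly):

    # List the ACPI tables the firmware passed to the kernel
    # (DSDT, FACP, APIC, ...) along with their sizes.
    import os

    TABLES = "/sys/firmware/acpi/tables"

    for name in sorted(os.listdir(TABLES)):
        path = os.path.join(TABLES, name)
        if os.path.isfile(path):
            print(f"{name}: {os.path.getsize(path)} bytes")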