WTF?
Who runs Fedora on a Raspberry Pi???
Fedora 41 is approaching the home stretch, but is currently beset by problems around Raspberry Pi support. Although Fedora 41 isn't due until late next month, it's nearing the beta stage and, at the moment, most of the bugs blocking the beta revolve around one problematic little computer: the humble Raspberry Pi 4. Fedora 40 …
BCM2712 (Raspberry Pi 5) is midway between Core i3-2350M (Sandy Bridge) and Core i3-8145U (Whiskey Lake) on Geekbench 6, single and multi-core, so a Fedora desktop should be fine on it (with 8 GB RAM).
On the other hand, BCM2711 (Raspberry Pi 4B) is about half of a Core i3-2350M (Sandy Bridge) on Geekbench 5, and so I doubt that it would provide a comfortable Fedora desktop experience (but OK on command line, like any toaster).
I have a 1U rack of Raspberry Pis, and hosting outfits like Mythic Beasts rent them out to people from their datacentre.
My Pis run:
- HomeAssistantOS
- Ubuntu
- Raspbian
and the others have chopped and changed between those and other OSes.
And the reason for that is: HomeAssistant *hates* any other OS. There was also a piece of software I wanted which basically demanded Raspbian (FlightAware - yes, I *HAVE* run it on other OSes, but if you want MLAT and other features, and don't want it to interfere with other RTL-SDR tools, you basically have to use Raspbian and the official packages). I prefer Ubuntu and was annoyed with the Raspbian-only software.
I'd rather have a small 1U rack with five Pis in it, each with a different single-purpose OS, than try to shoehorn lots of third-party software into a single generic distribution. It's actually MORE effort to maintain all the different software than it is to maintain five different OSes (especially as most of them manage their own software updates and/or still just let you apt-get update).
So I can quite see someone installing Fedora on one. Why not? If you're used to Fedora / Red Hat from years gone by, that seems sensible, and it's the kind of thing I'd do.
By preference, I run Ubuntu LTS on a Pi, which is a) vast overkill, b) homogeneous with my other (more serious) servers, and c) just fine.
FYI, my Pis run my entire house and are backed by both UPS and solar. Their power usage is so low that they and my entire network cabinet (including UniFi switch, DrayTek router, NAS, PoE, CCTV cameras, etc.), even running through a UPS and ATS, pull ~100W total, which is entirely covered by the solar over the course of the day - so effectively free.
The question is not "Who runs Fedora", it's "If all you need is a basic powerful standard Linux machine, who wouldn't want to run it on a Pi instead?".
My entire house IT runs on less power than my laptop.
Or...
Run Proxmox on an Intel NUC-based device
(I use an older NUC7PJYH)
Then you can run up as many VMs as you have RAM/storage.
Cheaper than buying multiple pis and associated accessories.
OS choice becomes a non-issue.
Potentially fewer headaches when migrating to better hardware.
Need more CPU? Assign more cores or migrate to a beefier host.
Actual SSD speeds without extra accessories, and no worrying about write cycles on your SD card.
Backups can be just snapshots.
There are things Pis are good for, but if you are using multiple in the same place, you are probably better off with one box plus a load of VMs.
Nope. VMs have all kinds of problems, not least the "all your eggs in one basket" problem. Oh, you buy another to make it redundant? Now you have two pieces of hardware, with one doing nothing all the time.
Additionally, when you're doing things like RTL-SDR radio analysis, or telephony, you don't want to be in a VM. At all. Ever. Especially when the bunch of antennas coming off those multiple RTL-SDRs give you so many cables that you'd need a very high-powered and VERY good quality (for bus speed) USB3 hub to power and manage them all (and I bet even then the RTLs would drop signal because of the internal USB bandwidth limits on the hubs).
Don't know if you noticed, but even "pre-SSD speeds" are absolutely fine for some purposes; the extra is really unnecessary. And the Pi 5 has NVMe support (admittedly that stops at PCIe 3.0 x1, but who is writing/reading 800MB/s constantly on a little home server, and why?). Also, you're assuming I'm writing via SD card (which, again, is still perfectly adequate for the majority of purposes - as my Steam Deck will attest!).
Every one of my Pis has a hat or two, every one of them is using all their USB ports, every one of them has networked and local SSD storage.
I'd list the number of services running off about £200 of Pis (including some older models) but it's as long as my arm and I'm not sure I could remember them all. Dual-DVB-t tuners, DVB-S tuners, tvHeadend, Plex, Flightaware, a ton of RTL-SDRs doing dump1090 and rtl_433 (in multiple modes on multiple frequencies) and other radio-tricks, home assistant including even-more-radios, matter and tapo integration, GPS and NTP, relays, GPIO, sensors, network services, VPN tunnels (with backups), SNMP and RS485 monitors, DNScrypt, Asterisk, etc.
They all check for each other over the network and use each other for redundant storage, services, backups. I wouldn't want, for example, my DNSCrypt to be a single physical machine as my entire network relies on it and I'd want to be able to have secondary DNS, DHCP, etc.
And most importantly - they're not end of support after only 3 years like the NUC7PJYH.
"Raspberry Pi 4 Model B will remain in production until at least January 2034."
"Raspberry Pi 5 will remain in production until at least January 2036."
And they have a standardised form-factor and GPIO layout built into them. One fails? I can just buy the next model up and pretty much it'll "just work" the same with the same hardware connected.
There are things Pis are good for... and that's anything where you need hardware integration and not "just a PC". For a start, five of various models will fit inside 1U (which a NUC doesn't) with the cheapest and simplest front-rack design, and you can easily fit 20-30 inside better-designed industrial 1U cases.
Which matters because if I wanted an off-site server colo'd... that would be the way I'd do it. One fails? Kick in the next without having to change hardware.
For my HA-and-media stuff I use Kubernetes*, an ancient NUC, and a small cluster of Pis for peripheral thingies, but I can respect the effort that went into your setup. The great thing about solutions is that there are so many of them.
*k3s specifically. I wanted a project to familiarise myself with it for work-related reasons, and the requirements of my home setup were just bizarre enough to touch nearly every part of the docs. I'm probably replacing it with Portainer soon. It's simpler. Relatively.
Obviously not something one is going to run Linux on, but rather surprising that it wasn't 'just' a core and memory change. That's a nasty bug and it looks like it's new silicon time...
Certainly it's killed an idea I had to replace an RP2040 until this is sorted; I had no inkling that this issue existed so thanks for pointing it out.
> Certainly it's killed an idea I had to replace an RP2040 until this is sorted
It is a tricky situation; if you have full control over the design the chip (or breakout board, like the Pico 2) sits in, then it is easy to get around - use external pull-down resistors. A pain, and not good for the BOM if you are using it in production[1].
But if you are in the situation of "just plugging together third-party boards" and running their software, to replace an RP2040 you have to try and figure out if anyone is relying on the internal pull-downs...
Of course I'd like to see a patched respin, but from the POV of (at this point in life) just being a hobbyist, I'm keeping my fingers crossed for a few cheap RP2350s being dumped as "bad" 'cos there is still lots of goodness to be wrung out of them.
[1] although it seems a bit early to use it in production; R&D yes, breakouts for retail yes, but full industrial production - nope.
> use external resistors
It is not clear that the solution is always that simple. Some users are reporting the issue as worse than has been officially acknowledged and has far greater impact than first believed.
Raspberry Pi have gone into denial mode, entered the spin zone, and are claiming there is nothing more to it than the errata states. They have started locking threads on their forum where users have been discussing the issue.
It is not Raspberry Pi's first rodeo when it comes to getting things wrong.
Perhaps we will again see Liz being wheeled out into the sunlight to claim it is another pile-on by concern trolls instigated by vegan activists?
> Some users are reporting the issue as worse than has been officially acknowledged and has far greater impact than first believed.
Have you got any solid URLs to share?
I've seen chatter, including some doomsayers, but nothing reliably techie (comparable to the original report from Ian Lesnet).
> It is not raspberry pi's first rodeo when it comes to getting things wrong
Sadly, definitely seen "reports" and "discussions" that take that premise and then catastrophise, which makes it harder to find out the realistic state of play.
But, so far, my sole RP2350 has been happily working as a direct replacement for an RP2040 when plugged into a happy project, except that it provides more memory and some more general oomph. So if you *are* chucking away a cartload of dev boards, I'll still take 'em off your hands.
One thread was locked because it was getting snippy. Two or maybe three others are left open for comments.
Pi just released an updated errata, so not in denial mode either. Simply doing the appropriate investigating before commenting further.
But hey, I guess the actual facts ruin your agenda.
Obviously not something one is going to run Linux on
Jesse Taube Gets Linux Up and Running on the Raspberry Pi RP2350's Hazard3 RISC-V Cores
Though I doubt it's practical or useful to do so.
And the boot fails?
Ok, so I still recall having to set date & time when booting the IBM PC. Every. Single. Boot!
But I've never really considered the RTC in a PC (or other box[1]) to be inherently reliable, so time() is not something to be trusted until you've synced with Rugby MSF, GPS, or NTP (in chronological order of them being built into systems). And that ignores all the embedded devices whose MCUs are now big enough to run Linux, but where a battery-backed RTC is still quite an addition to the BOM[2].
[1] happy (?!) days watching embedded devices on an isolated LAN trying to agree whose RTC had drifted the least, of the ones that had actually decided to work today. Although that last was down to (elided to avoid telling tales out of class)
[2] like a Pi
So this whole idea just makes me nervous. Can see why they've ended up in this situation, but still...
"and if it's before the time that some Fedora packages were certified with GPG, the setup process fails."
It would seem to me that it's a dumb OS that performs a boot-time check of package signatures that stops things working, rather than deferring it until its clock is reliable.
Imagine that on a server that loses Internet at the wrong time - one remote reboot, it doesn't pick up the time immediately / quick enough, and the whole thing just stops? Ridiculous.
That's a Fedora bug, not a RPI bug.
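The deferral being argued for could look something like the sketch below. Everything here is hypothetical - `BUILD_TIME`, `crypto_ok()` and the return values are illustrative, not how Fedora or rpm actually structure the check:

```python
import time

# Sketch: do the cryptographic check always (it doesn't need a clock),
# but defer the time-window check until the clock is at least plausible.
# BUILD_TIME, crypto_ok() and the string results are all made up.

BUILD_TIME = 1_700_000_000  # newest timestamp baked into the image at build


def clock_is_plausible(now=None):
    # A clock earlier than the image build time is provably wrong.
    now = time.time() if now is None else now
    return now >= BUILD_TIME


def crypto_ok(pkg):
    # Stub standing in for the real GPG signature check.
    return True


def verify_package(pkg, now=None):
    if not crypto_ok(pkg):
        return "reject"
    if not clock_is_plausible(now):
        return "defer"  # re-check once NTP has set the clock
    return "accept"
```

With that shape, a Pi booting with a bogus clock would defer rather than fail, and re-verify after time sync.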
"Imagine that on a server that loses Internet at the wrong time - one remote reboot, it doesn't pick up the time immediately / quick enough, and the whole thing just stops? Ridiculous."
Except that server will have an RTC which, even if it's not entirely accurate, means the time will be later than when it last booted, not earlier. The Pi usually saves the time when it shuts down, meaning that the clock at boot should always be wrong but at least shouldn't report that time went backwards - though it doesn't always manage that. If I install something and then lose power, the clock will be earlier than the installation time. Verification of packages is a deliberate decision, and verifying times as part of that is entirely normal. Disable it if you like, but don't pretend that it's always a bad idea.
Likely the same thing that the Pi would do a lot of the time: use the wrong, cached time, which is still later, and boot successfully.
But yes, one of the tradeoffs of the strict verification is that, if there is no RTC, the power is lost so the cached time is out of date, and packages were verified during the running session, then they will not verify during boot. Those prerequisites are pretty rare with a server that generally has an RTC that works most of the time and generally isn't losing power unexpectedly given that they often have UPSes and automatic shutdown scripts if power is going to go down. That's less common with a Raspberry Pi which doesn't have an RTC at all and is most often connected to a less reliable power source with no prediction if it is going to fail. With every program, you end up building in assumptions and tradeoffs.
For instance, the same RTC lack can mess with programs that start before a Pi gets a network connection. It's well known as something you have to consider. Things that use wall-clock time may behave oddly when it switches from the saved, wrong time to real time: results get reported as applying to huge time periods, and scheduled tasks end up running because the time they were supposed to run fell in the middle of a period that doesn't really exist.
For better or worse, a lot of normal userspace programs have decided that they will have a clock available, and if the clock acts oddly, such as skipping entire hours, that's the user's problem. Not every program needs special cases for that, especially since most of them can't distinguish between a clock acting oddly because the hardware makes it necessary and a clock acting oddly because of an actual bug or hardware failure.
The user on a server will deal with this by a) disabling the verification so boot completes, b) adjusting the clock before the operating system is booted, c) replacing the faulty RTC battery, or d) preemptively doing A because they've decided they don't care. That doesn't mean it should be preemptively disabled for everyone. One of the typical attitudes of the Linux community is that you should be trusted to know how a change will affect you and be left to get on with it. Defaults are defaults for a reason, but you're free to change them as you wish.
In RaspberryPiOS (or whatever it's called today) the Pis I run seem to write the current time to disc every now and then. This value is loaded at boot precisely to avoid the 01/01/1970 reset-type of problem. If the clock cannot be updated for whatever reason (lack of internet mostly) then the thing carries on believing it to be anything from a few seconds (after a simple reboot) to a few weeks (after being stuck in a drawer for a bit) behind real time.
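The save/restore trick described above can be sketched in a few lines. Real implementations (fake-hwclock, for example) persist to a file on a timer; the dict here just stands in for that file:

```python
import time

# Minimal sketch of "save the clock to disc, restore it at boot".
# SAVED stands in for the on-disk timestamp file a real tool would use.

SAVED = {"t": None}


def save_clock(now=None):
    # Called periodically and at shutdown.
    SAVED["t"] = time.time() if now is None else now


def restore_clock(now):
    # Called at boot: never let the clock be earlier than the last saved
    # value, so time at least doesn't appear to have run backwards.
    if SAVED["t"] is not None and now < SAVED["t"]:
        return SAVED["t"]
    return now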
M.
Would ultimately cause the same problem depending on when certificates expire.
Well I suppose so, yes, but this is likely to be of the order of many years for a never-connected device, and as you would probably want to connect the thing sometimes for simple things like updating it, the clock will periodically be corrected.
The system guards against a system update reading an obviously invalid time and I (as a complete amateur) reckon it should be trivial to update Fedora to do something similar. It's not a bug in the Pi 4 (no Pi has ever had an RTC as standard - the Pi 5 has the chip, but the required battery is not supplied), it's an unfounded assumption by the OS writers that the hardware will always have a near-correct real time clock. Presumably the same happens on commodity x86 hardware when the CMOS battery dies.
M.
You can get RTC modules for RPis very cheaply. I've had to install them in mine, to make running Ubuntu tolerable.
The intolerable thing is related to log files. Yes, there is a module that'll save current system time to disk on shutdown, and use that to set the system clock ASAP at boot up, but SystemD's way of logging doesn't work well with this.
For some reason, SystemD records the system time for each log entry made and uses that time stamp to sort log entries when using journalctl. That's fine, when you have an RTC and thence reasonably accurate system time from power-up. When you don't, a lot of boot time logging is timestamped with whatever the random system time is at power up. Then you get the cached time module moving the system clock, so log entries take a big deltaT. Then you get network time, and that's another deltaT.
The result is that diagnosing boot issues through log entries is a total frigging nightmare, because journalctl presents them in sorted time order, but all the ones you want to look at as a single "what happened during this boot attempt" block end up scattered around journalctl's output. It's pretty common to see log entries corresponding to the start of your last boot being displayed by journalctl as occurring before the shutdown that had preceded it. It's laughable.
I've no idea why journalctl time-sorts log data that - by its very definition - is created and stored in time order. It's slow and ridiculous, and another example of where the SystemD project has gone way off the rails. OK, so me bitching about the impact on a home RPi is one thing, but it's a nasty, nasty trap for someone with boot problems on a server that's also found to have a dead RTC battery. Journalctl does have a -b option to limit displayed data to the last boot, but there's no guarantee that the logged events are shown in the order they actually occurred (so far as I can tell).
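A toy illustration of that scrambling, with made-up timestamps: entries are written in sequence, but the clock steps forward mid-boot, so sorting on wall-clock time puts this boot's first messages before the previous shutdown. (journald does also record monotonic timestamps, and `journalctl -o short-monotonic` can display those instead.)

```python
# (seq, wall_clock, message) - seq is the order entries were written.
entries = [
    (1, 1699999000.0, "previous boot: shutting down"),
    (2, 0.5,          "this boot: kernel start (no RTC, clock near epoch)"),
    (3, 1700000000.0, "this boot: cached time restored"),
    (4, 1700000500.0, "this boot: NTP synced"),
]

by_wallclock = [m for _, _, m in sorted(entries, key=lambda e: e[1])]
by_sequence = [m for _, _, m in sorted(entries, key=lambda e: e[0])]

# Wall-clock order shows this boot's kernel start *before* the previous
# shutdown - exactly the scrambling described above. Sequence (or
# monotonic) order keeps the boot attempt together.
```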
So... basically we are saying that the drivers in the Linux Kernel don't work too well.
The lack of an RTC isn't the problem; this is just a bug in Fedora. When computers start up, they can safely assume it's the same time they last shut down, unless or until a more reliable time source is available...
io_uring is getting more capable, and PREEMPT_RT is going mainstream