I suspect when the author said "we don't own any HD displays", they meant "we don't own any UHD displays"...
Although it's entirely possible they don't own a TV?
Raspberry Pi Ltd has shipped two updates for its single-board computers: a very small refresh to Pi OS 6, and a more substantial upgrade to the tool that writes your Pi's operating system to an SD card. The Raspberry Pi Imager 2.0 is a significant new version of the best and easiest tool for creating boot media for any …
> Although it's entirely possible they don't own a TV?
My main TV was left in my house by the previous owner. It is from 2007, is not smart in any way -- which is just how I like it -- and it is not connected to an aerial. I think it is a 720P resolution (might be 1080P) on a 50" gas plasma display.
So it's something around 40 pixels per inch, +/- 10dpi, which is quite a lot _lower_ than VGA resolution. That is not HD by my definition, but as you can tell, I am not into TV.
I do not use it as a monitor. I sit about 2 or 3 metres away.
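For anyone who wants to sanity-check that figure: pixel density is just the diagonal pixel count divided by the diagonal size in inches. A quick sketch for a 50" 16:9 panel (Python purely for the arithmetic):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal resolution over diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_in

# A 50" panel at the two candidate resolutions:
print(round(ppi(1280, 720, 50)))   # roughly 29 ppi
print(round(ppi(1920, 1080, 50)))  # roughly 44 ppi
```

Either way it lands in the sub-45 ppi range the post estimates - far below what you would see from any monitor at desk distance.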
Yup, although modern LCDs utilising all the latest tricks are now at or beyond that level of intensity, depending on how much you're willing to spend, and even entry-level displays now benefit from the trickle-down of formerly high-end tech. Our current LCD generates images that I'd say are on a par with our old plasma in this regard. The first LCD we bought to replace that old monster was definitely lacking in the intensity/punchiness of the images it could generate, though it more than made up for that with all of the extra clarity we gained in the switch from 720P to 2160P...
Only in the warmer months of the year - once the temperature started dropping, our old plasma TV took on a dual role as both a display device and a radiator, helping to reduce the load on the actual heating system for that part of the house... And compared to the alternatives that existed at the time for achieving big-screen output, they really weren't that bad. It's only when you stack them up against a modern energy-sipping display, optimised to within a gnat's doo-dah to comply with the latest energy-saving standards, that plasmas seem like power-gobbling monsters.
"50" gas plasma display."
Random memory.
My downstairs neighbor mounted a 50" plasma to the ceiling above his bed. That lasted a week before he took it down. When I asked why, he said the TV position was great, but he lay awake at night continuously thinking "Don't fall on me. Don't fall on me..."
"which is quite a lot _lower_ than VGA resolution"
It's quite a bit lower than the sort of *DPI values* you might expect for a VGA frame rendered on the screen sizes more commonly associated with displaying VGA content, but that's not a like-for-like comparison. If you were to take a VGA frame and pipe it through the VGA connector that's likely to exist on the back of said plasma display, it ought to be fairly immediately obvious how much less detail can be resolved within that frame compared with a 720 (let alone a 1080) frame on the same display.
Similarly, if you were to pipe your 720/1080 frame into a HD-capable display of the same physical size as your typical VGA display, then the DPI value would be correspondingly higher than for VGA, and it'd now be obvious you were looking at a clearer, sharper, more detailed image than if you were feeding that display with a VGA signal.
So as far as determining whether something is SD, HD, UHD etc, you should be looking solely at the dimensions of each frame in terms of horizontal and vertical pixel count, and ignoring how big or small each pixel is once rendered on whatever size display that frame has been sent to. In that context, your TV *is* therefore HD.
At this point you're into the realm of diminishing returns and the fundamental resolving power of the Mk.1 eyeball though, so less surprising you can't discern any difference. I'd be rather more surprised if you still made the same statement if one of those screens was being fed from a VGA source, whilst the other was being fed from a 720P (or even just a SVGA) source.
"My main TV ... is from 2007, is not smart in any way -- which is just how I like it -- and it is not connected to an aerial. I think it is a 720P resolution (might be 1080P) on a 50" gas plasma display."
Your description sounds like my TV, except that mine is 60" (so inadequate resolution would be slightly more noticeable, not a boast). I had to call tech support, and convince them I was not interested in complaining, to learn that it is actually 720P. The manufacturer claims to upscale input to 1080p, and lists it as such publicly. I have no idea what the true resolution is, but I choose 1080i when offered a list of resolutions, and it looks great.
A few years ago, I visited a friend for a few days - he had just bought a large 4K 60fps TV, and it really looked very impressive when offered high resolution content. When I came home, I was worried that turning on my old TV with its meager resolution would be a distinct disappointment.
But I was surprised and pleased to find that my old TV is still a great TV, numbers be damned. I have no plans to "upgrade".
It does run quite warm, tho.
"We have read that if you invest in a gold and silver-plated premium HDMI cable to go with your 600 Hz specialist eSports monitor, the colors look richer and more vibrant."
True, but the insulation color makes a huge difference. Blue cables tend to warm up the image slightly, while the brown or black cladding favors darker colors. Also, you need to consider the impact of the insulation on the audio quality as well. I'm not sure why plaid insulation isn't offered to nullify some of these effects, but I bet someone out there will eventually wise up to the revenue-generating potential of patterned cable cladding for the most discriminating users. I could go into the incredible enhancement to be gained from a 1mm thicker audio cable as well, but I'll spare the reader.
Worth noting that while a few years ago it was practical to image with dd, these days if you do that you'll wind up with a machine with no SSH access, making headless installations... problematic. This is the main reason I've had to suck it up and abandon the terminal for the imager.
> a few years ago it was practical to image with dd
I still do sometimes.
I plug it into a screen and Ethernet, set my own password, and configure it that way. This is, IMHO, the _opposite_ of rocket science when it comes to setting up a computer. It's the low-tech way.
Your point strikes me as "I need to use one hi-tech tool because the other hi-tech method doesn't work".
Maybe I am missing something...?
On Trixie and Bookworm to make a headless image this works for me:
sudo dd if=2024-03-15-raspios-bookworm-armhf.img of=/dev/sdb bs=4096 status=progress
(usb out and in again to mount new partitions)
# set up for ssh access and first account
cd /media/roger/bootfs/
touch ssh
echo 'pi:'"$(echo 'raspberry' | openssl passwd -6 -stdin)" | sudo tee userconf.txt
# (note: "sudo echo ... > file" doesn't elevate the redirect, so pipe through tee instead)
This post has been deleted by its author
I have spent a few hours with old and new versions of Imager. The old one now throws up a VERY ANNOYING nag screen, multiple times, pushing the new version. I used the old one a lot, mainly due to the ease of setting up headless systems.
The new version seemed to change things for the sake of change. But ..
I came across this, that seems to answer all my prayers. https://github.com/gitbls/sdm/tree/master
Sadly I find it hard to find my way round new features
I am admittedly old fashioned but I still prefer to install an OS rather than image it.
That is, I want to interact with an installation program that lets me partition disks the way I need, select which packages to put on, which services to enable, and so on. Rather than (somewhat blindly) accept someone else's notion of those things in a pre-curated image.
Even better, I'd like to PXE boot said installation program, and ideally have some kind of scripted install method to go along with it, such that the routine is mostly hands-off.
In the past I've pulled apart those pre-made images and modified or re-assembled them to fit my purposes. It's manageable but it feels clunky, and I'm not very interested in maintaining a "library" of custom images which need to be re-done when a new release happens or I need to modify something.
So while I like my rpi4b well enough, I don't imagine I'll be getting a fleet of them. As Liam described in other writings, the situation with every SoC essentially being a new porting effort doesn't make it easy for system developer folk, and so it doesn't make a good match for the way I prefer to run the environment here. Somewhat disappointing, as I have no strict requirement for x86.
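For reference, on the x86/Debian side the mostly hands-off routine described above is what debian-installer preseeding gives you; a minimal fragment (values purely illustrative) looks like:

```
# Hypothetical preseed snippet: locale, guided LVM partitioning, extra packages
d-i debian-installer/locale string en_GB.UTF-8
d-i partman-auto/method string lvm
d-i pkgsel/include string openssh-server
```

Served over HTTP alongside a PXE-booted installer kernel, that gets you most of the way to unattended installs - the catch on Arm being, as noted, that there's no single installer image that boots everywhere.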
If I'm setting up a server of some sort on a Pi I just mount an external USB drive on /srv. If, the standard server installation puts the data somewhere in /var I move it onto the mounted drive and symlink it back. If it were a situation with significant data in /home that would also be on an external drive but I don't really use a Pi like that. If the SD card dies it doesn't really matter, the important stuff is safe.
Sod's law: in the middle of syncing the data back from NextCloud onto my reserve laptop, the 9-year-old hard drive died - SD card fine. Time lost rather than data, fortunately, and the replacement is a Pi 5 with two drives mirrored with LVM.
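The move-and-symlink trick is only a couple of commands. A sketch using throwaway paths so it's safe to run anywhere - on a real Pi you'd stop the service first, then use its actual data directory and your /srv mount:

```shell
# Stand-ins for /var/lib/<service> and the drive mounted on /srv
datadir=$(mktemp -d)
srv=$(mktemp -d)

echo "important stuff" > "$datadir/db"

# Move the data onto the "external drive", then symlink it back
# so the service still finds it at the old path
mv "$datadir" "$srv/myservice"
ln -s "$srv/myservice" "$datadir"

cat "$datadir/db"   # still readable via the old path
```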
> I am admittedly old fashioned but I still prefer to install an OS rather than image it.
I can relate.
As I have said before -- take a look at Alpine Linux, then. You can do this with the Pi version. :-)
> the situation with every SoC essentially being a new porting effort doesn't make it easy for system developer folk,
There is an outlying possibility here that doesn't get much airtime.
Windows ties itself closely to the kit. Has since Win95, and got worse since XP with the copy-protection measures (WGA, etc.) This is a big problem on servers: even if you have lots of good full backups, you can't restore them onto anything except _identical_ kit or it won't work -- either work right, or work at all.
What many in the industry started doing about 15+ years ago was install the free VMware ESX server. In a VM, the hardware is always identical. Install into a single VM, no resource sharing, one server on one box. Then back up the VM. You can now restore onto anything.
It is possible that the Arm manufacturers _could_ come up with something like this.
Remember that until it had no choice in the matter, Intel had zero interest in making the computers built around its chips compatible. That was the PC manufacturer's issue, which they solved by cloning IBM kit.
It took over 20 years for UEFI to come along and move those goalposts, after IBM had left the industry.
Arm doesn't care. It's not interested.
Raspberry Pi makes £30 disposable computers for kids. It is not going to affect Qualcomm or Annapurna's Graviton chips or anything.
But maybe UEFI or something like it _could_.
I am currently looking into one chink in the wall on this matter.
> It took over 20 years for UEFI to come along and move those goalposts, after IBM had left the industry.
>
> Arm doesn't care. It's not interested.
>
> Raspberry Pi makes £30 disposable computers for kids. It is not going to affect Qualcomm or Annapurna's Graviton chips or anything.
>
> But maybe UEFI or something like it _could_.
It's not clear to me exactly what you meant above.
Arm-based servers do come with UEFI these days (it's defined as part of whatever Arm call their SBSA these days).
As for RPIs, I saw an article a month or so ago (an interview with someone from RPI Foundation) which rumoured that the next RPI (Pi 6) may come with UEFI as standard.
I dunno if that's better or worse. :)
In my (admittedly minor, i.e. 1 rpi4b) ARM faffing about, I'm not particularly enamored of u-boot. Plopping down a UEFI firmware bundle onto it did improve the setup I'm using (NetBSD aarch64 fyi) but it didn't really make me happy -- UEFI on x86 kit has always irritated me, this might just be a holdover.
It's been downhill since OFW if you ask me. ;-)
This post has been deleted by its author
:-) You're welcome.
> The core architecture is pretty elegant.
Agreed.
> The Acme editor/environment is a complete mind fuck though.
Strongly agreed.
The snag in 2025 is that Dis is 32-bit only.
However there is an effort to fix that, with an apt name:
https://github.com/9mirrors/purgatorio
The article mentions Balena Etcher to create bootable USB sticks. As the developers of Tails say:
"However, in 2024, the situation changed: balenaEtcher started sharing the file name of the image and the model of the USB stick with the Balena company and possibly with third parties."
Rufus is a better option if you are privacy-minded.
See https://tails.net/news/rufus/ for details.
> "However, in 2024, the situation changed: balenaEtcher started sharing the file name of the image and the model of the USB stick with the Balena company and possibly with third parties."
Sad news.
You know what: I know nothing at all about Balena except for Etcher, but one of their techies did a talk at Open Source Summit in Bilbao a few years ago. He showed how OStree works under the surface, which is terrifying, but the subject was, if I remember correctly, emulating a Raspberry Pi under QEMU for development purposes.
This is non-trivial but exact compatibility with the firmware doesn't really matter at Linux level.
The thing is, his setup instructions included enabling KVM. I went and asked at the end what the hell KVM had to do with emulating an Arm on x86, as I saw no connection.
He airily dismissed it: he said I was probably right, he just threw this stuff together based on lots of Googling and it seemed to work. He didn't seem to really understand the difference between machine code for different processors, or between emulation and virtualisation, or any of that boring techie stuff.
For me, a disturbing insight into the modern dev mindset.
> Rufus is a better option
No blasted use if you don't run Windows, though.
Also, while I find Rufus great for making bootable Windows keys without Ventoy, it's very slow in my experience.
Another option I've found myself using more and more is USB image tool.
It's only for Windows, but has the capability to not only write images to cards/sticks, but also to read the device back to an image (ie make a full backup that you can just reflash and restore).
Last time I checked, Balena Etcher couldn't do that, and as Win32 Disk Imager (which can also do backups) has decided it doesn't want to work on my machine for some reason it was nice to find it.
Very handy for periodically backing up SD cards from my many and various Pi's for example, as I've just done as part of a mass Trixie update.
No affiliation or anything with the software or author, but as other options were being shared...
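For the non-Windows crowd, plain dd already does the read-back trick: pull the whole device into an image file you can reflash later. A sketch using a scratch file in place of the real device - on an actual card you'd read from something like /dev/sdX instead:

```shell
# Create a small scratch "device" so this is safe to run anywhere;
# substitute your real SD card device in practice.
dev=$(mktemp)
img=$(mktemp)
dd if=/dev/urandom of="$dev" bs=1M count=4 status=none

# Read the "device" back into a backup image...
dd if="$dev" of="$img" bs=4M status=none

# ...and confirm the image matches the source byte for byte
cmp "$dev" "$img" && echo "backup verified"
```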
But what is the barrier preventing? Maybe it needs to be there, especially for beginners.
Sudo, by removing the need to know a second, root, password is already a design pattern that breaks this principle:
Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key. (https://en.wikipedia.org/wiki/The_Protection_of_Information_in_Computer_Systems )
The original paper was dated 1975. Surely this stuff should have been taken on board by now.
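For what it's worth, sudo itself can be told to demand the root password instead of the caller's, which restores something closer to that two-key model. A one-line sudoers fragment (added via visudo, never by editing the file directly):

```
Defaults rootpw
```

With that set, sudo prompts for root's password rather than the invoking user's, so compromising a single account no longer hands over both keys.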
"Surely this stuff should have been taken on board by now."
It has been, at least by the technically inclined. Remember when Sun Microsystems started shipping computers without a factory set root password? The installer (human) had to create one, during the first boot ... and with the same procedural change, you were forced[0] to create a user account in order to finish installing the OS. That was in the late 1980s ... and I remember engineers with 10 or 15 years of industry experience bitching about it because supposedly it made their life difficult. The mind still boggles ...
IOW "it's too hard, so I won't learn it! ::stamps foot::". I've been seeing this for decades. Sometimes in this very forum. So have you.
MeDearOldMum didn't even know her computer had a root account for the first dozen or so years that she ran Slackware. As my dad put it, "what she don't know she can't hurt herself with". She has the root password now, just in case, but has never used it. And probably never will. No need.
She has also never had the need to run sudo.
[0] For small values of "forced". You could ^C out of the runfirst script after logging into the root account for the first time.
> I've been seeing this for decades
100% true.
Even when something is better -- cleaner, simpler, faster, whatever -- the old hands resist it.
This is _why Unix still exists_. The cleaner, simpler, faster version is Plan 9. And the smaller, easier successor to Plan 9 is Inferno.
But it's _different_ and the type of "expert user" who favours Vi won't try it.
If I want to edit a text file on my home computer, on my Raspberry Pi, or on my work server, I use vim because it's there, it works, and I know how to use it. If I want to edit files on an Amiga, a Spectrum Next, or on a Windows PC, I also install vim or qe because I can, it works, and I know how to use it.
I tried nano, which is allegedly friendlier. The option to save is called WriteOut and the shortcut is CTRL-O. Faced with that kind of nonsense I might as well go back to vim, which also has nonsense shortcuts, but the time spent committing them to memory was worthwhile because vim is everywhere and I can use it anywhere.
"But it's _different_ and the type of "expert user" who favours Vi won't try it."
Plan9 and its children are an interesting OS family ... I've been running it on one box or a dozen, and in one guise or another, since it was first made available. To date, I have found absolutely no use for it at all, except as a tool to learn about (and teach!) OS design, and as a curiosity. I used it as my main writing platform for about a year (coding, documentation, contracts, the books I'm writing, longer posts to ElReg, dead-tree letters, etc. ... ). Honestly, I gave it a good solid chance, but I'm back to vi on Slackware.
Plan9+kids is the poster child for a solution looking for a problem.
But I like the silly thing. I want to find a use for it. Maybe someday.
Hehe, seeing PiOS written as one word reminds me of PIOS, a name I've not heard in a long time…
(I always thought it was a really stupid idea to change the OS name from Raspbian - an easily searchable unique word, if you're looking for relevant information - to a new multi-word name, one part of which is OS as a separate word, which, given the quantity-over-quality crappiness of many search engines nowadays, is often likely to turn up all sorts of irrelevant pages in search results…)
The Raspbian project name was "owned" by independent developers, Mike Thompson and Peter Green, as a port of Debian.
Raspberry Pi based theirs on it but eventually went with upstream Debian, so made up their own name.
Agree Pi OS was a stupid name choice to replace it, as was calling their MCU "Pico" - also a common word, which makes searches less easy.
I'm still waiting for Ubuntu (from the official installer) to support the DVB-T hat (the official product) on the RPi 5 (official product) properly.
On Raspberry Pi's own OS, it "just works". On older Pis it "just works". On Ubuntu (any version whatsoever in the last few years) with a Pi 5, the DVB-T driver just crashes, and from that point on it can't tune or do anything and just spams dmesg with errors.
Everything I find tells me "Oh, it's fixed, oh no it's not" and basically just says don't use Ubuntu.
It's a kernel module driver problem, but Ubuntu can't be bothered to actually fix it with any official update even on the latest supported LTS versions, etc.
I just used the old Imager to reload Ubuntu after it got fried by a recurring black screen of death problem that prevents the OS from booting. My theory is that this is caused by a security problem with the Pi 5; some people are skeptical of or even hostile to this idea. It will be great to have the opportunity to try out the new Imager the next time my installation of Ubuntu fails to load.
Something well worth mentioning (perhaps even an article update), as it addresses some of the comments: the latest Raspberry Pi OS release, based on Debian Trixie, includes cloud-init ... Hurrah !!
As you may know, cloud-init is a cross-platform, distribution-agnostic tool used to automatically configure systems on first boot, i.e. you can provision your Raspberry Pi images with users, network settings, SSH keys, and storage configurations without manually logging in after flashing the image.
https://cloudinit.readthedocs.io/en/latest/explanation/introduction.html
Kudos to Pi Engineering for also adding specific extensions for cloud-init configuration to allow enabling hardware interfaces (I2C, SPI, serial, and 1-Wire, and rpi-usb-gadget auto-magically).
Now time to test it ...
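A minimal user-data sketch to go with that - the user name, key, and options below are illustrative rather than taken from the Pi docs, and the file is typically dropped onto the boot partition before first boot:

```yaml
#cloud-config
hostname: pi-headless
users:
  - name: pi
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... replace-with-your-public-key
ssh_pwauth: false
```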