Why bother?
64 bit
Ubuntu 20.10 is out, with Canonical highlighting its Raspberry Pi support, including not only desktop support but also micro clouds based on MicroK8s, the company's lightweight Kubernetes distribution. Codenamed "Groovy Gorilla," Ubuntu 20.10 is only supported for nine months, unlike its predecessor 20.04 which is a long-term …
A minimal Ubuntu server (no GUI) would usually be more practical, in my terrible opinion. Just use ssh and sshfs from a normal workaday desktop/laptop.
Even in special circumstances, e.g. when using VS Code remote debugging and the remote app requires a GUI, it is possible to forward X11 from the app back to the host over the ssh connection, so no desktop on the RPi is required.
The exception would be when a standalone RPi application needs a GUI to interface with humans.
But THIS article is about the RPi-specific GUI. I never denied that Ubuntu server was already available. As my comment about the new GUI states -
"The exception would be when a standalone RPi application needs a GUI to interface with humans."
But the article doesn't talk about such applications at all. Instead it says "When it comes to the Pi, Canonical is likely most interested in the possibilities for distributed applications. A Pi can become a Kubernetes node thanks to MicroK8s, a minimal Kubernetes distribution".
There are two separate stories:
(1) Ubuntu GUI for standalone applications on RPi. Great!
(2) A network of RPi enabling distributed applications, each RPi running Ubuntu server. As I mention, using X forwarding over ssh allows apps using X to display on a central remote host. Yes, there are other ways (e.g. remote desktop) but X forwarding is native to Ubuntu/Linux so why not use it?
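As a minimal sketch of the X-forwarding setup described above (the hostname `pi@raspberrypi.local` and the `xclock` client are placeholders, not anything from the article):

```shell
# Connect to the Pi with X11 forwarding enabled; any X client started
# in that session then renders on the local desktop's display:
#
#   ssh -X pi@raspberrypi.local   # placeholder hostname
#   xclock &                      # runs on the Pi, draws on your desktop
#
# Without connecting anywhere, ssh's config-dump mode (-G) confirms
# that -X flips the ForwardX11 option on. Guarded in case the openssh
# client is not installed on this machine:
if command -v ssh >/dev/null 2>&1; then
    ssh -X -G localhost | grep -i "^forwardx11"
else
    echo "openssh client not installed"
fi
```

If an app misbehaves under plain `-X`, `-Y` (trusted forwarding) trades some isolation for compatibility with X extensions the security extension blocks.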
Hey Canonical don't kill the messenger. You've got a good product, you just need to get your stories straight.
I have several programs which I test on ARM in both 32 and 64 bit. The 32 bit version runs on Raspbian, and the 64 bit version runs on Ubuntu Server 20.04. The programs do a lot of very CPU intensive number crunching. The tests include a complete set of benchmarks which test all aspects.
One C program runs 33% faster when compiled for 64-bit compared to 32-bit, while the other runs 63% faster. I do make extensive use of SIMD operations, however, and AArch64 doubles the number of 128-bit NEON registers available compared with 32-bit ARM.
One Python program runs 25% faster on 64-bit Python than on 32-bit, and another runs 27% faster in 64-bit.
The hardware is the exact same Raspberry Pi 3 in each case; I just swap the SD card with the OS image.
Not every use case will be like this. However, if you want the best possible CPU performance, then there are very measurable performance advantages to 64 bit as compared to 32 bit.
Maybe someone can help me here, as I can't tell where Snap "fits", or why apt-get install isn't functionally better at the same level.
Say you want to install LibreOffice with Snap. O.K., that shouldn't be a problem, considering you're never really going to need to dig around in its install directory. However, with anything that you might want to dig into, you hit a wall and have to go out and learn more about Snap.
Admittedly I haven't tried containers with Snap, but with Eclipse and Arduino, installing and modifying things manually wasn't so straightforward (I figured out Eclipse but gave up on the Arduino plugins). I honestly can't see the appeal of Snap, at least not in the presence of Docker and K8s. As far as using "snap install microk8s" goes, well, that seems like running a container in a container, which makes me wonder if what's coming next is running a Docker container with Snap to run K8s...
In my limited experience Snap is fine as long as you don't have to do anything with the file system.
I installed KeePassXC as a Snap without realising, and found that it couldn't read my data file from Google Drive because of where it's cached in the local file system (in a hidden folder in my home directory). As far as I can tell the Snap developers consider this lack of access to be a feature and I couldn't find a way to allow the app to read the hidden directory, so I uninstalled the Snap and installed with apt.
https://ubuntu.com/blog/a-guide-to-snap-permissions-and-interfaces
It's sandbox stuff, not file-permission stuff (well, technically some file-permission machinery is used in the background).
From the CLI you use the snap command to change them.
https://snapcraft.io/docs/interface-management
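For example (a sketch, assuming snapd and a snap named `keepassxc` are installed; `home` and `removable-media` are the standard interface names):

```shell
# List a snap's plug/slot connections, then grant the interfaces that
# cover $HOME and mounted media. Listing works unprivileged; connecting
# needs root. Guarded so this degrades gracefully without snapd.
if command -v snap >/dev/null 2>&1; then
    snap connections keepassxc || true   # show granted/available interfaces
else
    echo "snapd not installed"
fi
# Granting access (run as root):
#   sudo snap connect keepassxc:home             # non-hidden files in $HOME
#   sudo snap connect keepassxc:removable-media  # SD cards, USB drives
```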
Yes there are permissions buttons, and I set them to allow read/write access to my home directory. But as far as Snap is concerned that doesn't include hidden directories inside my home directory. And as far as I'm concerned that is a step too far away from convenience and towards security.
Next time I suggest that you test your suggested solution before being so condescending about it.
Notably, it's useful if a thing you want to use has dependencies incompatible with your existing installation or with some already-present program; a snap, being more self-contained, might be a convenient solution.
Isn't that what containers are for? And you can actually access the filesystems of those.
Well yes, but sometimes you don't want a full container just to run a program with a different version of some libraries. These app bundles seem to be becoming more of a thing, and they are OK for rapidly changing packages that are always out of date in the standard PPAs. I know people who would much rather build AppImages than debs, and they seem to be more portable.
Every snap I have seen, however, seems to be an unmitigated disaster. They break when they try to talk to external programs; they break when they try to access a file on an SD card. You look on a forum and, technically, this is not Snap's fault, it's the dev's: they needed to set some option or do it with this command.
The thing is: where does Snap fit? AppImages are not managed, but they are simple; you download them and they run. Apt packages can have dependency hell, but once you get one installed it always seems to run. Snap sits halfway between the two in philosophy, but a million miles away in execution: harder for devs to make, less portable, and seemingly always going wrong.
Why on earth did they try to reinvent the wheel? If they wanted a new distribution channel that was open and free of dependency hell, why did they not look at a way to curate and install AppImages? But no, they wanted to make a killer app that was owned by them. It may prove to be a killer app. It has killed my enthusiasm for Ubuntu. I am looking for a snap-free place to jump ship to.
Ah, yes, you touched on the base issue:
"...that was owned by them..."
Canonical want to control your ecosystem
They want to control what you install,
feedback goes to them,
marketing comes from them.
Why are they still based on Debian?
Ubuntu should be able to create their own complete OS from scratch
and base it on snap only: no apt, no dependency hell.
Lazy
According to snapcraft.io: "Snaps work across Linux on any distribution or version." So it's actually supposed to be more convenient for the application developers and OS developers, who will no longer have to maintain/test per-OS or per-container versions of apt packages. So goes the theory.
However, currently I see (from inside Ubuntu) both apt and snap versions being maintained. Depending on the app, one or the other is usually the newer version.
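One way to check which channel is ahead for a given app (firefox here is an arbitrary example package, and both commands are read-only; the guards exist only because neither tool may be present):

```shell
# Version offered by the Snap Store (needs network access, hence || true)
if command -v snap >/dev/null 2>&1; then
    snap info firefox | grep -E "^(installed|tracking)" || true
fi
# Version offered by the apt archive (Installed/Candidate lines)
if command -v apt-cache >/dev/null 2>&1; then
    apt-cache policy firefox | head -n 3
fi
```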
Snaps really are the wrong answer to a question almost nobody was asking. I get what they _thought_ they were trying to achieve: you just bang this little container down in its own walled garden, and there's no way for it to make a mess of anything else that might be installed on your system. No dependency problems. No uninstall issues.
But outside utterly trivial cases I never, ever want to do that on a Linux box, and for the trivial cases... well... I don't _need_ any of that extra security. Anything I install I'm going to want to configure in some way, and now there's this tedious abstraction layer between me and what ought to just be sitting in /etc/ or /var/lib.
It sort of makes sense for desktop applications that are kinda fire-and-forget installs, but that's not really what I use Linux for, and the people who _do_ use Linux like that just plain don't _care_ how their application got installed.
Maybe someone can help me here, as I can't tell where Snap "fits", or why apt-get install isn't functionally better at the same level.
Got the same feeling reading this...
I admit that I'm not a fan of snap; it's big, it's slow, it hinders tailoring, it duplicates in more than one way, and let's not start on its teaming up with that other great monolith, systemd. But I can also see that it might be heaven for some less sophisticated (crossover?) users, which Canonical loves to target. Everything has its use case eventually, I suppose...
However, what I fail to see is why a large (I repeat, LARGE, since duplication WILL happen with snaps, and what about the underlying snap code itself...), slow (boot, update...), limiting (see the KeePassXC example here) system is let loose on a Pi, which has limited resources/capabilities to begin with. Whatever happened to the right application for the right situation? Sure, the Pi is so resilient that it will run, which is cool, but is it really thoughtful use..?
The two big problems with Snap, for me, are the huge amount of disk space it uses and the need to update manually, there being as far as I can see no equivalent of apt-get update and apt-get upgrade. A further issue is lousy integration with Xubuntu, since only a few Snap applications offer to create /usr/share/applications/<name>.desktop to get the application into the menu system. And even when you do that manually, the next Snap package has a different name, so you have to do it again.
sudo apt-get purge snapd worked very nicely for me.
snap refresh
But you're not really meant to run it manually. It runs automatically when it wants to, with no way to control it yourself (probably my biggest bugbear with snaps).
That assumes you installed from the Snap Store. If you sideloaded a snap that you downloaded manually from some website, then you will have to update it yourself.
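For what it's worth, snapd does expose some scheduling control, even if refreshes can't be switched off outright. A sketch (`refresh.timer` needs a reasonably recent snapd; the window shown is an arbitrary example):

```shell
# When did snapd last refresh, and when is the next one due?
# Guarded so this degrades gracefully on machines without snapd.
if command -v snap >/dev/null 2>&1; then
    snap refresh --time || true
else
    echo "snapd not installed"
fi
# Confine automatic refreshes to a window, e.g. Friday night (root only):
#   sudo snap set system refresh.timer=fri,23:00-01:00
```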
The primary use case for Snap is proprietary game publishers who want to distribute a single binary and not have to update or recompile it when the underlying OS changes significantly. Games on Linux are not a huge market, but they do exist, and Snap was designed with heavy input from the publishers as to what they needed to make it economically worthwhile for them.
As I recall, Snap came from Ubuntu's mobile-phone adventures. It was intended for distributing fully self-contained phone apps, which couldn't have package interdependencies like rpm/dpkg. The blob is larger, but space is cheap(er); it can be updated more often and independently of the underlying operating system, and is better sandboxed/isolated. Ubuntu claimed some way to de-dupe shared components between snaps. The isolation makes desktop, theme, file-system, etc. integration painful at the edges at times. That's usually where people start cursing. Of course, Red Hat/Fedora has Flatpak, which makes very similar claims.
I am the less sophisticated Raspberry Pi user (non-programmer) who probably does not belong on this forum, but I learned lessons the hard way when installing Snapcraft.io applications on my Raspberry Pi 4 (4 GB) running Raspberry Pi OS. Mostly that installing Snapcraft.io applications on Raspberry Pi OS can break various functions of the Raspberry Pi OS itself, necessitating a re-install/re-image of Raspberry Pi OS onto my boot SD card. I have since learned to make SD backups so I don't have to start over from scratch when recovering, but I was wondering whether installing Ubuntu 20.10, now that it is supported, would make Snapcraft.io applications work better? Does anyone have experiences to share in this regard?