Will Flatpak and Snap replace desktop Linux native apps?

I've been using desktop Linux since before many of you were born. Seriously. I first ran it when I downloaded the source code from Linux kernel developer Theodore Ts'o's MIT FTP server in 1991. So, when I say it's time to wave bye-bye to using package managers such as apt or dnf and replace them with containerized package …

  1. theOtherJT

    Performance isn't free...

    ...and while the overhead might seem trivial when you're running a single instance on a single laptop, it doesn't scale nicely. I hate to even imagine how much memory and CPU power we're wasting running this sort of thing in datacentres that might be host to literally millions of instances of SnapPackagedSuperFutureApplication compared to running it natively.

    And that's before you start worrying about the compounding performance cost, because SnapPackagedSuperFutureApplication is now running in a container because someone decided that would be cool - and of course that container is running in a VM, because we don't do bare metal these days... all these layers of abstraction add up.

    It's a cute technology for the desktop, but it's really not viable everywhere. The concern a lot of us have is the same as we had with systemd - yes, it benefits desktop users in a whole bunch of ways, but it comes with unintended side effects once it proliferates enough to become "the standard" and now it's sitting on my servers getting in the way.

    1. John H Woods Silver badge

      Re: Performance isn't free...

      I have my suspicions about systemd, but no expertise: can you give a simple example of it getting in the way?

      1. Doctor Syntax Silver badge

        Re: Performance isn't free...

        I'd be more curious about ways in which it benefits desktop users.

        As far as I'm concerned systemd resembles Snap & the rest in that they attempt to solve non-existent problems.

        1. Kristian Walsh Silver badge

          Re: Performance isn't free...

          The problems Snap fixes are only "non-existent" to you because of the countless unpaid hours of labour by package maintainers.

          Free as in speech, not as in -loading.

          1. mpi Silver badge

            Re: Performance isn't free...

            That labour doesn't go away because of any virtualization, however.

            Snap and Flatpak will not displace traditional package managers, period; it's not going to happen. The reason is simple: these ecosystems are well established, they work, there is tons of documentation around them, and all that isn't going away in a hurry. If a package maintainer decides they no longer support apt, the only thing that happens is that either someone else maintains it, or an alternative piece of software picks up the mindshare.


            1. Anonymous Coward
              Anonymous Coward

              Flatpak doesn’t (yet) fix the biggest problem

              Show me the distro which uses the flathub runtimes entirely in lieu of distro-provided ones. Until then, Flatpak is wasting, not saving, time at a resource cost greater than sandboxed .app bundles… which takes some doing to achieve!

            2. Mage

              Re: Performance isn't free...

              Agree totally, and some of us were using UNIX before Linux existed. How long the author has used Linux doesn't make him an expert on Flatpak or Snap.

              I have serious concerns about Flatpak, and if there is a choice, I now install the deb version. Recently I uninstalled all the flatpaks that had deb versions and replaced them with debs.

          2. Doctor Syntax Silver badge

            Re: Performance isn't free...

            If something needs specific libraries or whatever, install it in /opt. I have Seamonkey, LibreOffice, VirtualBox, stuff from Brother, Informix and others all packaged that way. It was an accepted and successful way of doing things long before Snap & friends arrived.

            1. werdsmith Silver badge

              Re: Performance isn't free...

              The number of times I've read through getting-started instructions and they suggest using snap, and if you don't have snap, then download snap here….

              I just think fuck this, and look for another way. Screw all this clutter and fragmentation; I don't want it. Rather than bundle up tons of bits and pieces in a container, static link, for fuck's sake.

          3. Grogan Bronze badge

            Re: Performance isn't free...

            Funny how it's the most commercial distros that grouse about having to support applications.

            Community distros do that work just fine and you don't hear them complaining.

        2. Graham Cobb Silver badge

          Re: Performance isn't free...

          I don't often disagree with the good Doctor, and I dislike the way systemd imposes Lennart's particular preferences on users, but I think it is clear that there are ways in which it "benefits desktop users". The fact that it manages a complex startup dependency tree is a clear and significant benefit to desktop users compared with sysvinit, and it is fairly silly to pretend otherwise.

          Is that tradeoff worth the many downsides of systemd? Well that is much more of a personal opinion. So far I think the answer is yes as I have saved so much time from not having to worry about startup dependencies that I am willing to spend the time necessary to deal with the problems systemd creates (for example, the restrictions in their implementation of crypttab). However, I am beginning to be concerned by some of the more recent apparently mandatory policy decisions, some of which I may not be able to live with.

          1. unimaginative Bronze badge

            Re: Performance isn't free...

            The problem with systemd is that it is discussed as an init system.

            Its detractors will point out that systemd is designed to be much more than that. Its fans tend to deny this and say you can just use the components you want from the project.

            The best case I have heard made for System D (spacing and capitalisation because I just got annoyed by a rant about how you should not do so) is to argue for it as an additional layer of the operating system, providing a whole load of extra functionality from a single project.

            systemd is a suite of basic building blocks for a Linux system.

            1. Grogan Bronze badge

              Re: Performance isn't free...

              Yeah, those brow beating systemd fanboys (that even pick fights over upper case D) with that same tired old argument about it being "more than an init system". Yeah, so what, syphilis is a lot more than a venereal disease and I don't want that either.

              It's a big, fat monolithic init system that does a metric shit tonne of unnecessary and unwanted other things on my system and you can't mask all of it out. I hate the journals and the way I have to fight for the socket to use another sysklogd program too (standalone I mean). What bollocks.

              Some people take more pride in their systems than that.

          2. Anonymous Coward
            Anonymous Coward

            Re: Performance isn't free...

            ... willing to spend the time necessary to deal with the problems systemd creates ...

            Not me, nor the huge number of users who understood the POS it was from the start.

            ... beginning to be concerned by some of the more recent apparently mandatory policy decisions, some of which ...


            A bit late to the shit show, no?


            Before you know it you will be looking at all, not some.


      2. Anonymous Coward
        Anonymous Coward

        Re: Performance isn't free...

        A real world example I witnessed more than once: a long (minutes) delay after issuing a shutdown/reboot, while systemd presumably tried to unwind some part of its dependency tree which had gone sideways.

        When I tell a system to shutdown, I expect it to go down -- not sit there spinning for (literally) minutes at a time. Worse, I couldn't even tell what it was actually hanging on -- the shutdown progress dialog list on console wasn't helpful, and the tools for triaging systemd's dependency tree are convoluted and frankly not very helpful. It's not like the system was in a bad way to begin with, either. No hung mounts, runaway procs, or other usual suspects.

        Perhaps in the systemd devs' haste to make startup faster, they did the opposite for shutdown. But shutdown is every bit as important.

        Thankfully I haven't seen that behavior again in a while; hopefully it's been fixed properly, by accident or otherwise.

        That's a "getting in the way" example. My bigger problems with systemd are in other categories like "feature creep", "KISS violation", and "unhelpful developers".

        1. AdaLoveseal

          Re: Performance isn't free...

          This. Exactly this. I've had servers taking 5 minutes to initiate a shutdown, and also servers actually *never* shutting down, having to be rebooted from the vCenter. It gets old real fast when you are trying to automatically reboot systems during off-hours to update kernels and the shutdown never completes. Add to this login timeouts on older versions of libnss-systemd, mount point supervision that doesn't detect network mounts going down...

          Systemd was utter sh!te in the days of Debian 8/9. It's getting a little better now, but still far from being as reliable as, say, runit. The shutdown bugs are mostly a thing of the past though, not really seen in the versions shipped with Debian 10+.

          The only good bit is unit files, and they're miles better than the old sysvinit. The file watchers are OK (not perfect, but OK). The unit override directory tree is just weird and confusing, and the private /tmp has caused me quite a lot of headaches, but overall creating systemd services is really painless, which is not always the case (though, well... runit).

        2. Anonymous Coward
          Anonymous Coward

          Re: Performance isn't free...

          I see this weekly with Debian and systemd; it could be boot or shutdown, and it is random. It does not happen every day and does not implicate the same service start or stop each time. This inspired me to find non-systemd distros. Too many apps seem to use or call into systemd, making it hard to get away from.

          Like herpes it just keeps spreading.

          1. Orv Silver badge

            Re: Performance isn't free...

            I think the main difference is that systemd will actually wait for a buggy app to try to shut down; the timeouts are very long. This is different from SYSV or BSD init, which would just send SIGTERM followed by SIGKILL and hope for the best.

            The delays are irritating but in the long run it's probably better to fix the apps that don't shut down cleanly.
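            Those stop timeouts are tunable, for what it's worth. A sketch, assuming a systemd-based distro; the values are illustrative, not recommendations:

```ini
# /etc/systemd/system.conf (or a drop-in under /etc/systemd/system.conf.d/)
[Manager]
DefaultTimeoutStopSec=15s

# Or per unit, via "systemctl edit buggy.service":
[Service]
TimeoutStopSec=5s
```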

            1. Anonymous Coward
              Anonymous Coward

              Re: Performance isn't free...

              "The delays are irritating but in the long run it's probably better to fix the apps that don't shut down cleanly."

              Sounds like there is plenty of time to work on fixing apps while the server is in a hung shutdown, sometimes.

              Snark aside, I tend to think it's not really the apps at fault, here -- it's systemD. What app do you blame when a healthy running system doesn't shutdown while systemD is thrashing about?

              I grew up in the good-bad ole days when your SunOS IPX might hang on the way down due to unresponsive NFS server mounts or similar -- I get that. Send a break over the serial console, pick up the pieces and carry on.

              This systemD thing was not that.

            2. Adrian 4

              Re: Performance isn't free...

              > I think the main difference is that systemd will actually wait for a buggy app

              A log entry will do that. It doesn't need an irritation every time the user shuts down. It's not Windows.

            3. fajensen

              Re: Performance isn't free...

              The delays are irritating but in the long run it's probably better to fix the apps that don't shut down cleanly.

              I dislike this idea of an "app", this being a defective and poorly crafted app (!) to boot, telling "§SYSTEM" how to run things!

              The Unix way is that we shoot it in the head.

          2. This post has been deleted by its author

          3. Anonymous Coward
            Anonymous Coward

            Re: Performance isn't free...

            Like herpes it just keeps spreading.

            Yes, just keeps on spreading.

            But it is much worse than herpes.

            Like some other 'tard has said before:


            Systemd is a virus, a cancer or whatever you want to call it. It is noxious stuff.

            It works just like the registry does in MS operating systems.

            It's a developer sanctioned virus running inside the OS, constantly changing and going deeper and deeper into the host with every iteration and as a result, progressively putting an end to the possibility of knowing/controlling what is going on inside your box as it becomes more and more obscure.

            Systemd is nothing but a putsch to eventually generate and then force a convergence of Windows with or into Linux, which is obviously not good for Linux and if unchecked, will be Linux's undoing.

            There's nothing new going on here: it's nothing but the well-known Microsoft embrace-and-extend at work.

            Now go and tell me that Microsoft has absolutely nothing to do with how systemd is crawling inside/infecting the Linux ecosystem.



            1. CRConrad

              Who was that?

              Yagotta link?

        3. fobobob

          Re: Performance isn't free...

          Nothing like needing to reboot a machine quickly and having to wait 1.5 minutes per filesystem for it to figure out there's nothing left to do (and with no easily accessible explanation of what exactly is causing it to wait). It's all configurable, and can be quite useful, but the defaults that seem to have been pushed out are not particularly great.

        4. Ganso

          Re: Performance isn't free...

          I spent some time exorcising MS shite from my laptops, so for some months I distro-hopped a bit to see what would be better for a beginner. By that time, SystemD was already the default in most of the mainstream distros. I settled on an Arch variant; on one of every three shutdowns, damned SystemD would hang for several minutes, as you described. Went onto forums and IRC, could not find the solution.

          Then I started reading about it, but for a neophyte, it was quite daunting to make sense of all the discussions and flame wars. In the end, I said "I am just a home user, I want my laptop to shut down when I tell it to shut down, screw this SystemD garbage" and that was it; I've been using a rolling-release distro without that shite SW for years, and normally it is a smooth experience.

      3. vulture65537

        Re: Performance isn't free...

        Mark Bannister (of Jane Street) documented one on Linkedin a few years ago while I still had an account on it.

      4. bluebullet

        Re: Performance isn't free...

        Unix philosophy is to do one thing and do it well. Systemd tries to do everything counter to that thinking.

      5. bluebullet

        Re: Performance isn't free...

        Systemd killed my startup routines for date and sound for security reasons. No reason to do that.

      6. Anonymous Coward
        Anonymous Coward

        Re: Performance isn't free...

        ... can you give a simple example of it getting in the way?

        First time around here, eh?

        Oh, right ...

        I get it.


    2. Orv Silver badge

      Re: Performance isn't free...

      I hate to even imagine how much memory and CPU power we're wasting running this sort of thing in datacentres that might be host to literally millions of instances of SnapPackagedSuperFutureApplication compared to running it natively.

      They're already doing this, but with Docker, not Snap.

      There are tradeoffs here. It's much easier to manage applications and keep them secure if you isolate them from each other. That used to mean running each one on a separate machine, which was expensive and wasted resources, because those machines were idle most of the time. VMs were one solution to that; Docker is another. It's heavier than running all the apps together on one machine, but it's lighter-weight than running separate hardware, and preserves some isolation.
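      The middle ground described here can be sketched in a couple of Docker commands (hedged: assumes a working Docker install; the image and memory limit are illustrative):

```shell
# Two instances of the same app, isolated from each other but sharing one
# kernel - no per-app VM, no per-app hardware.
docker run -d --name app-a --memory 256m nginx:alpine
docker run -d --name app-b --memory 256m nginx:alpine
docker stats --no-stream app-a app-b   # per-container resource usage
```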

      1. Adrian 4

        Re: Performance isn't free...

        I have an operating system to support apps.

        I don't want to ship large parts of that with the app. At that rate I might as well just have a VM supervisor, not an OS.

        It may be that the problem is apps rely on other apps instead of OS services, and they aren't packaged definitively enough.

        In that case, those apps need to migrate towards being reliable, properly supported and versioned services instead of half-assed web things.

        We don't want to end up with an OS that's maintained like python, do we ?

        1. Michael Wojcik Silver badge

          Re: Performance isn't free...

          Yeah. Duplicating OS userspace is inelegant and wasteful. It's a lazy solution to a problem that wouldn't exist if library maintainers were more careful about compatibility, and application developers didn't feel a need to 1) bring in every bit of functionality they've ever heard of, and 2) use libraries for every trivial thing.

      2. rcxb Silver badge

        Re: Performance isn't free...

        That used to mean running each one on a separate machine, which was expensive and wasted resources, because those machines were idle most of the time.

        No, most often it meant running each one under a separate user account on a machine.

        Of course a few apps which needed root access throughout wouldn't run that way. But usually just a change in port or changing permissions on files or devices was adequate.

  2. Anonymous Coward
    Anonymous Coward

    Snap and Hidden Files

    People assume Snap/Flatpak/etc-packaged apps are just "drop-in" replacements for existing apps.

    This is only the case where the app and package maintainer have a good understanding of what is going on and have made it work for you.

    What do I mean by this? Take hidden files - files prefixed with a dot/period - as a real-world example that has issues with Snap.

    Did you know that Snap apps can't see hidden files? When you execute "curl" on a fresh Ubuntu install you will find it is not installed by default anymore. However, the shell will suggest you can use snap and apt to install it. Given that the Snap version is often updated first, people will install the Snap option (to get the latest version).

    But... then it won't read your ~/.curlrc file.

    Full disclosure: I've not picked a great example with curl, as I'm not sure I've ever used ~/.curlrc, but the point is that ~/.blahrc files won't be read by Snap apps, and allowing it requires user/system config changes (i.e. as far as I know it is NOT simply a property that the Snap package in question can declare to let it read the .rc file).
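    That said, snapd does have a packager-side escape hatch for specific dotfiles: the `personal-files` interface, which the maintainer must declare (and which users often still have to wire up with `snap connect`). A sketch of the snapcraft.yaml stanza, where `dot-curlrc` is an illustrative plug name:

```yaml
plugs:
  dot-curlrc:
    interface: personal-files
    read:
    - $HOME/.curlrc
```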

    1. Ganso

      Re: Snap and Hidden Files

      Well, I did not know that... that is just silly.

  3. nematoad Silver badge
    Thumb Down

    Lucky you.

    "I run my Linux desktops on modern systems with powerful processors, 16GBs of RAM, and speedy SSDs. "

    I don't.

    My main box is an elderly Viglen desktop with 6GB of RAM and a couple of HDDs.

    It works for me and does all I need it to do in a fashion that I find acceptable.

    Are you suggesting that we all go the Win 11 route and splash out on the latest kit just so that the OS will boot?

    There are many reasons why people run Linux and the ability to run the OS on what others might regard as obsolete resource starved systems is one that I have seen mentioned many times on El Reg and other sites.

    Just because you have access to top of the line kit does not mean that everyone else has.

    There is an old saying "I'm alright Jack."

    1. Anonymous Coward
      Anonymous Coward

      Re: Lucky you.

      "Waahwaaah I like my ancient computer, things shouldn't progress for anyone because I won't buy a new one."

      If you want an old computer, you are free to use old software. Not everyone else's problem.

      1. big_D Silver badge

        Re: Lucky you.

        It is not just old computers.

        I have a bunch of Raspis doing various tasks, 2GB RAM, plus a Raspi 400 with 4GB RAM. Those are newish computers (all under 4 years old).

        At work, our standard PC is a Core i3 with 4GB RAM. Most laptops have 8GB, but a majority of the desktops are still on 4GB.

      2. mpi Silver badge

        Re: Lucky you.

        > If you want an old computer, you are free to use old software. Not everyone else's problem

        My workstation is powerful enough to run LLM training on, and I still don't want superfluous virtualization running on it.

        Why? Simple: My compute resources are for running things that have a right to need them, not for things that need them because they are buried under a mountain of cruft. It's the same reason why I don't run an electron app that gobbles up 2GiB of RAM while idly displaying a chat window or running a music player.

        If something eats a lot of resources for no good reason it's bad software, plain and simple. And I don't run bad software.

        1. Orv Silver badge

          Re: Lucky you.

          As usual there's a tradeoff between human effort and machine effort. If your use case is such that it's better for you to spend more human effort maintaining apps so that your computer expends less effort running them, that's fine -- you probably shouldn't use snap/flatpak/docker/etc. But a lot of people would rather let the computer do the work.

          1. mpi Silver badge

            Re: Lucky you.

            a) My computer's resources don't exist to make the lives of package maintainers easier.

            b) Me using Flatpak instead of installing a pkg via pacman doesn't save anyone work.

            Because traditional packages aren't going anywhere, period. The only result of Flatpak/Snap becoming widespread is that there will be two more package standards that projects will have to support. If they don't, then people will use alternatives that do support the package system they use.

            1. Orv Silver badge

              Re: Lucky you.

              I don't just mean maintainers. I'm not a maintainer but I've still spent a lot of time untangling Debian systems that were in dependency hell after a failed upgrade of something.

              1. mpi Silver badge

                Re: Lucky you.

                Yes, I have spent a lot of time untangling servers as well. Almost everyone who has SysAdmin work to do has.

                And everyone who thinks that this effort outweighs the benefits is free to use whatever new virtualization paradigm they think is most beneficial to their use case. Hell, I use virtualization heavily, in the form of docker containers.

                And everyone who comes to the conclusion that the virtualization is detrimental to the use case is free to not use them.

                My point isn't that flatpak etc. are bad. My point is, they won't replace native pkgs. Not now, not ever, because compute resources matter, there are slimmer virtualizations, and having isolated file systems that are not as easily fixed as mounting a docker volume can easily suck as hard as the worst dependency bug after a blorked update.

                And let's be frank here: nowadays my goto fix if an update causes dependency problems usually isn't to fix the server. My fix is to simply spin up a new machine in minutes, and wipe the blorked server.

                And I'm pretty sure they won't replace them as the goto pkg solution in the desktop space either. Because most desktop users never get into dependency-hell situations to begin with. I daresay that 99% of all software that is used on linux desktops is whatever comes installed with the system, plus maybe another music player and alternative browser. And even going beyond that, most users simply click "install" in the GUI frontend of their pkg manager, and it just works, out of the box.

                Dependency-hell problems aren't something that the day-to-day normal user frequently encounters, if at all. The people it actually concerns are power users and sysadmins, who, incidentally, are also the people who know how to deal with it.

          2. Adrian 4

            Re: Lucky you.

            There's also a difference between resources required once, for packaging, and resources required on every boot or application start.

            The idea that you should repackage every day is also absurd, and if it is done, it's done by machine, as in a nightly or a well-named unstable build.

        2. Anonymous Coward
          Anonymous Coward

          Re: Lucky you.

          If something eats a lot of resources for no good reason it's bad software, plain and simple. And I don't run bad software.

          +1 ...

          I know it's not Friday yet, but have one on me. -> |""|D


        3. Ganso

          Re: Lucky you.

          Well said. Have a pint ->

      3. GioCiampa

        Re: Lucky you.

        re: "Waahwaaah..."

        I'll bet you also whine (anonymously, of course) that people use adblockers because you're too lazy to host the advertising yourself?

        After all, if you're happy using CPU and storage resources when you don't necessarily need to... you *should* be hosting advertising, email servers, etc... in fact, nothing at all that isn't locally hosted.

      4. unimaginative Bronze badge

        Re: Lucky you.

        Continually buying new stuff because software gets more bloated is also a fairly serious environmental issue.

      5. Michael Wojcik Silver badge

        Re: Lucky you.

        "Waahwaaah I want to waste resources for no good reason."

    2. big_D Silver badge

      Re: Lucky you.

      I have a lot of Raspberry Pis running Linux, most have 2GB RAM and a slow ARM processor...

      1. Greybearded old scrote

        Re: Lucky you.

        A computer running 4 cores at GHz speeds with RAM in the gigabytes is not slow. It's a sodding supercomputer that would have had a Cray 1 user going, "You what?!"

        If it seems slow, the attitude of the author and AC above is why.

        1. karlkarl Silver badge

          Re: Lucky you.

          It's true. And it has enough power to run ~20x home PCs from the late '90s.

          The mindset of IT consumers today is just wrong, frankly. They are effectively ruining it.

        2. Intractable Potsherd Silver badge

          Re: Lucky you.

          And the Atari ST would have had an ENIAC user going "You what?!" I don't understand your point - fast yesterday is slow today.

          1. Greybearded old scrote
            Thumb Down

            Re: Lucky you.

            Fast yesterday should still be fast today, and new hardware should be even faster. Instead we pile on more and more bloat, wasting everything that was gained. My main PC used to be greased lightning. Now the current versions of the same software keep me waiting like its predecessor did.

            A new computer should only be needed if the old one wears out or I want it to do larger jobs.

          2. Ganso

            Re: Lucky you.

            Bill, is that you?

    3. pdh

      Re: Lucky you.

      YES! For many of us, one of the attractions of Linux is that it runs well on older machines where Windows won't run. I am typing this on a 2015 Lenovo W500 that I inherited when my wife tried to move from Windows 7 to Windows 10 but found it unusably slow. And this is not my oldest Linux laptop.

      1. Fred Dibnah

        Four Yorkshiremen

        W500? Pah. Mine’s a T400 running Mint 21 XFCE, plus a T60 as a backup :-)

      2. Anonymous Coward
        Anonymous Coward

        Re: Lucky you.

        ... not my oldest Linux laptop.

        Ha !!! -> Linux Devuan Beowulf / Openbox WM and a backported kernel on an Asus 1000HE (Atom 280/1.7GHz-2Gb RAM - 500Gb HDD).

        Take it with me whenever I am out of town; it has never let me down.


    4. Displacement Activity

      Re: Lucky you.

      There are many reasons why people run Linux and the ability to run the OS on what others might regard as obsolete resource starved systems is one that I have seen mentioned many times on El Reg and other sites.

      I run my apps on minimum possible/cheapest hardware on a VPS: 1 pretend CPU, 2 GB RAM, 8+ GB SSD (I can't get much to work on less than 2 GB RAM). This costs the end-user about $110/year. And, just for kicks, it also runs on RasPi. These servers have to run for 2+ years with only security updates. I suspect that this sort of system is going to way outnumber Linux desktop systems over the next few years.

      Bizarrely, the image I start with (a 'minimised' Ubuntu server) has snap pre-installed, which has to be removed. If I get an SSL certificate with the Certbot snap, for example, it costs me 500 MB of SSD. And, equally bizarrely, snap is considered to be more important than rsync, cron, ufw, etc, which aren't pre-installed.

  4. Anonymous Coward
    Anonymous Coward


    Slow

    Whilst I can see the attraction, whenever I install a snap app on Ubuntu it takes an age to start, every time, and this is on a respectable AMD Ryzen 4700 system with an SSD.

    Then some snap apps can only access the home directory.

    Is Flatpak any better?

    1. Rikki Tikki

      Re: Slow

      "Is Flatpak any better?"

      Not in my experience. I removed the whole thing a couple of months back, saved 11 GB.

      As so often seems to happen, the end user is the least important person in design decisions - whether Windows or Linux.

    2. abend0c4

      Re: Slow

      There's a lot of overlap between snap, flatpak and docker-style containers; they're basically trying to solve the same problem (though there's some dichotomy between GUI and server applications).

      I think there's some possibility of addressing all of them with the same technology and the same tooling, but I don't think the complexity and overhead of loopback-mounting an archive is it (though that's the way Unix goes: there's your solution and there's mine and neither is ideal and neither will compromise...) . You could possibly get further by using the actual file system and library names with suffixes based on a file checksum so that applications that shared the precise same version of a library could share it rather than duplicate it, but it would require infrastructure and, more importantly, build changes: it wouldn't just be packaging unless you could post-process the linker output.
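      That checksum-suffix idea is easy to sketch in shell (hypothetical `store_lib` helper; the 16-character digest prefix is an arbitrary choice):

```shell
# Store a library under <name>.<content-digest>, so byte-identical copies
# shipped by different packages collapse onto a single stored file.
store_lib() {
  src=$1; store=$2
  digest=$(sha256sum "$src" | cut -c1-16)
  dest="$store/$(basename "$src").$digest"
  [ -e "$dest" ] || cp "$src" "$dest"   # first copy wins; later duplicates are free
  printf '%s\n' "$dest"
}

# Two packages shipping identical libfoo.so end up sharing one stored file:
mkdir -p store pkg-a pkg-b
printf 'same bytes' > pkg-a/libfoo.so
printf 'same bytes' > pkg-b/libfoo.so
store_lib pkg-a/libfoo.so store
store_lib pkg-b/libfoo.so store
ls store | wc -l   # one file, referenced by both
```

As the comment says, the hard part isn't this bookkeeping; it's getting builds reproducible enough that "byte-identical" actually happens in practice.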

      Not a fan of having to learn a new set of magic commands to ensure the application can access files or devices you already have access to ...

      1. Richard 12 Silver badge

        Re: Slow

        That's more or less how Microsoft (mostly) solved DLL Hell - the WinSxS subsystem uses application manifests to select the right set of common DLLs.

        ... and a huge database of known badly-behaved applications and heuristics, because it turns out a lot of developers ignore all the documentation.

  5. Steve Davies 3 Silver badge

    Using Snap comes at a cost

    The management of Snap that goes on in the background is not free - i.e., it uses CPU cycles.

    These are the lines of /var/log/messages that contain the word 'snap'. This is an AlmaLinux system that runs a WordPress site. Snap is used to manage the 'certbot' key renewal process.

    Why does it need to check so often for updates?

    Seeing all this has made me very anti-SNAP.

    With previous versions of certbot, you ran a cron script every week. The key was renewed well before expiry. It worked and was very simple. Now we get this crap sandwich.

    Jun 8 12:18:02 TSE snapd[31695]: storehelpers.go:769: cannot refresh: snap has no updates available: "certbot", "core", "core20"

    Jun 8 12:18:02 TSE snapd[31695]: autorefresh.go:551: auto-refresh: all snaps are up-to-date

    Jun 8 20:03:03 TSE snapd[31695]: storehelpers.go:769: cannot refresh: snap has no updates available: "certbot", "core", "core20"

    Jun 8 20:03:03 TSE snapd[31695]: autorefresh.go:551: auto-refresh: all snaps are up-to-date

    Jun 8 21:28:02 TSE systemd[1]: Starting Service for snap application certbot.renew...

    Jun 8 21:28:03 TSE systemd[1]: snap.certbot.renew.service: Succeeded.

    Jun 8 21:28:03 TSE systemd[1]: Started Service for snap application certbot.renew.

    Jun 9 05:53:02 TSE snapd[31695]: storehelpers.go:769: cannot refresh: snap has no updates available: "certbot", "core", "core20"

    Jun 9 05:53:02 TSE snapd[31695]: autorefresh.go:551: auto-refresh: all snaps are up-to-date

    Jun 9 08:25:00 TSE systemd[1]: Starting Service for snap application certbot.renew...

    Jun 9 08:25:01 TSE systemd[1]: snap.certbot.renew.service: Succeeded.

    Jun 9 08:25:01 TSE systemd[1]: Started Service for snap application certbot.renew.

    Jun 9 08:33:02 TSE snapd[31695]: storehelpers.go:769: cannot refresh: snap has no updates available: "certbot", "core", "core20"

    Jun 9 08:33:02 TSE snapd[31695]: autorefresh.go:551: auto-refresh: all snaps are up-to-date

    Jun 9 11:53:34 TSE systemd[158479]: Listening on REST API socket for snapd user session agent.

    By contrast, this is the output for 'flatpak':

    Jun 7 08:30:19 TSE systemd[7834]: Starting flatpak document portal service...

    Jun 7 08:30:19 TSE systemd[7834]: Started flatpak document portal service.

    Those entries were made when the system rebooted after an update to the kernel.

    When I build the next iteration of this server I will do everything possible NOT to let SNAP anywhere near the system. It is 1,000,000,000 steps backward IMHO.

    The people responsible for this POS in Canonical can go suck on this --> [see icon]

    1. Howard Sway Silver badge

      Re: Using Snap comes at a cost

      Adding the entry "" to /etc/hosts tends to disable it all very effectively I've found, with no ill effects.

      The "for" argument in favour of snaps, etc is ease of use and dependency management.

      The "against" argument is bad performance and having to download and install half an OS of dependencies with each application.

      Considering the fact that SSDs are still quite expensive, and massive downloads are still not all that fast for me, filling them up with huge bloat to avoid having to type "apt install program_name" in a terminal is not attractive for me at all.

      1. HereIAmJH

        Re: Using Snap comes at a cost

        The "for" argument in favour of snaps, etc is ease of use and dependency management.

        If we really want to show our age, we could suggest going back to static linking applications instead of going through contortions to virtualize them. That would solve dependency issues (Linux version of DLL Hell) by linking all the dependencies into the binary. "Disk space is cheap...."

        We could also fondly remember LSB and dream of a day when Linux distro doesn't matter when it comes to choosing applications.
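        For anyone who hasn't done it in a while, a minimal sketch of the static-vs-dynamic build (assumes gcc is installed; the -static step additionally needs a static libc such as glibc-static or musl, so it's guarded):

        ```shell
        # Build the same trivial program twice: once dynamically, once statically.
        cat > hello.c <<'EOF'
        #include <stdio.h>
        int main(void) { puts("hello"); return 0; }
        EOF

        gcc hello.c -o hello_dyn                        # normal link against shared libc
        gcc -static hello.c -o hello_static 2>/dev/null \
          || echo "static libc not installed here"      # every dependency baked in

        ls -l hello_dyn hello_static 2>/dev/null || true  # static binary is far larger
        ./hello_dyn                                       # prints: hello
        ```

        "Disk space is cheap" is doing a lot of work there: the static binary carries its own copy of every library, which is precisely the trade-off the snap/flatpak bundles make, just at the packaging layer instead of the link step.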

      2. Mike_R

        Re: Using Snap comes at a cost

        From experience on Ubuntu 20.04: to disable and/or purge snap, Google (or DuckDuck..) "Linux nosnap"

        e.g. (works on Ubuntu, also)

        Getting rid of systemd is more complicated, you would probably have to switch distro (see mxlinux etc.)

    2. jackharrer

      Re: Using Snap comes at a cost

      It also takes a lot of space. I run small VMs for different purposes (general browsing, secure browsing, dev, etc).

      I noticed I started running low on HDD space on them. The culprit? Snap. It was keeping old versions of libraries that weren't even needed, and I couldn't find a way to remove them. So I nuked the whole of Snap (which was pretty much only used by Firefox) and replaced it with Firefox from Launchpad. Space saving: 5GB.

      This is just ridiculous, considering that the whole OS with LibreOffice and stuff is 11GB.

      Snap also creates tons of mountpoints (one per app), slowing down startup/shutdown considerably. It may not be visible on a dev workstation, but it certainly is on a quad-core VM.

    3. Wzrd1 Silver badge

      Re: Using Snap comes at a cost

      I've already had to uninstall containerized packages: there were critical vulnerabilities, the update to the containerized package was delayed for unknown reasons, and a patch was available for the offending software, which was so thoughtfully containerized and hence refractory to patching without expending more man-hours than just compiling from source and installing it the old-fashioned way.

      Add in that now I have to run a package manager on my test systems for updates, then snap, then whatever other joy of a containerized package system the distro may thoughtfully include. Package managers were created to resolve dependency issues, which containerized package managers now exist to resolve, and I'm sure we'll add another 16 layers of work for system administrators to slave over.

      Because tossing out the baby with the soiled diaper is an option or something, call it optimization. I'll call it eventual extinction.

  6. b0llchit Silver badge

    One update?

    Ok, all the packages are snap/flat/this/that/docker/dumbler or whatever system. Now imagine that one (1) library needs an update. Let's say a library that's used quite a lot, for example the SSL/TLS library. Now you need to update N multi-megabyte packages instead of the one affected library.

    You can say: well, on my desktop, what's the problem? Yes, now imagine the server with 30+ instances.

    Then you can say, well, storage is cheap... bandwidth is cheap, etc. But remember: Good, Cheap, Fast, pick two, you cannot have all three.
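    The point can be put in numbers; these sizes are entirely made up, but the shape of the arithmetic is what matters:

    ```python
    # Back-of-envelope: patching one shared library versus re-downloading every
    # self-contained bundle that embeds it. All sizes are invented.
    lib_size_mb = 5          # the patched SSL/TLS library on its own
    bundle_size_mb = 150     # a typical self-contained app bundle
    instances = 30           # a server with 30+ packaged apps

    shared_cost_mb = lib_size_mb                  # one download fixes every app
    bundled_cost_mb = bundle_size_mb * instances  # every bundle must be refreshed

    print(shared_cost_mb)   # 5
    print(bundled_cost_mb)  # 4500
    ```

    Three orders of magnitude of difference per security update, multiplied across every host, is the "cheap bandwidth" that stops being cheap.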

    1. Degats

      Re: One update?

      Not to mention when said library has a critical must-patch-yesterday vulnerability, you're now reliant on *every single* application developer that uses it to quickly update their package (assuming they're organised enough to even know it's been inherited 5 levels deep) rather than just 1 maintainer.

      1. bazza Silver badge

        Re: One update?

        A very small counter argument is that if the fix is a breaking change in the library, then all apps are going to have to be updated anyway. That's a rarity, but when it happens the snap approach isn't that bad. One could pick and choose which out of date apps to let run still.

        However I'd rather have the one-update-fixes-everything property of a shared library, because that is the more common scenario.

        Cripes, I find myself defending snap. Am off to pub for a corrective beer.

        1. why you delete my account?

          Re: One update?

          Problem is, you don't know what is or is not a "breaking change" in a library until you actually test it with your apps and your configuration.

          IMHO "we changed the library API" is actually the easy one to deal with; it's the "we fixed the implementation and it shouldn't break anything but..." that is the harder one. Do you test everything, or do you upgrade and pray it doesn't break anything important and that it's the more reasonable (and less senior) users who find the issues? And what do you do if you test and something doesn't work with the new library version: start on the highway to dependency hell, custom-configuring installations or library loads, or ditch the upgrade and leave every app with a security vulnerability for the sake of one that may not even be exposed?

          With snap / flatpak / whatever containerised installs you can legitimately hope that someone has tested the app with the library versions it ships with, and if there is one app that a library security fix breaks, then you can upgrade the others and maybe mitigate the threat to the problem one, without mucking around in dependency hell.

          Which is worth more to you - the management and testing time or the memory / disk space ? There won't be one right answer for everyone or every scenario, it's a trade off and a debate that will run and run, in fact to me it's the same static vs. dynamic linking debate that has run ever since Linux got shared libraries (the old jump-table ones pre-ELF), just that these days the memory and disk usage at stake is a 2-3 orders of magnitude more.

  7. Arthur the cat Silver badge

    It's not XOR

    Only developers build from source code these days. The vast majority of users use package managers.

    I'm rarely a dev these days, but build from source to create packages then use the package manager to install across my various machines depending on what they do. I do this because I want different compilation options from the standard. That's the great thing about FreeBSD, you can mix and match as you like without some self proclaimed expert telling you how you will do things their way.

  8. Anonymous Coward
    Anonymous Coward

    Just yet another layer of virtualization

    "Flatpak and its rivals can also run on any Linux distribution"

    So can Windows, any version, and all of its applications. What's the miracle in that? Just run a virtual machine with pre-installed pieces, because that's what it is.

    You could run flatpak et al on Windows too, they only need to adapt them to use Windows APIs.

    1. steelpillow Silver badge

      Re: Just yet another layer of virtualization

      "You could run flatpak et al on Windows too, they only need to adapt them to use Windows APIs."

      That would be very useful for the odd specialised Windows app that does not do a Linux version; save messing with WINE, which I have never fathomed how to configure/troubleshoot.

  9. Doctor Syntax Silver badge

    May I refer the honourable gentleman to the analysis of my LibreOffice installation in /opt

    This is an application for which the download site provides 1 DEB and 1 RPM archive* for amd64 Linux for each version**. The vast majority of the files it contains are what might be summed up as "resources", almost certainly cross-platform.

    There are 10 times as many html help files that would need translation for language as .so files that might conceivably need adapting to compile on a particular Linux distro.

    There are about half as many again XML & XSLT files as .so and .jar combined.

    Anyone who's been using Linux for such a long time must surely be familiar with the notion of installing applications along with a selection of libraries and resources in /opt. That's the problem that Snap, Flatpak & the rest of them set out to solve. It was solved long ago, without the extra baggage that those bring along. On the subject of baggage, I find it particularly ironic that some time ago I decided to have a look at Flatpak and tried to install it on whatever vintage of Debian I was running at the time. It failed to install because some dependency wasn't satisfied by Debian's version of some library, a notable failure of the KISS principle.

    * A tgz archive that expands to 42 individual .deb files. In addition there is a language pack, a further archive bundling 3 .debs (including dictionary and readme files), and a help pack containing a further .deb.

    ** Currently 7.5.4 and 7.4.7. The two files per version is the same as are provided for Mac (Intel & Apple silicon) and for Windows (32 & 64 bit).

    1. Anonymous Coward
      Anonymous Coward

      ... a notable failure of the KISS principle.

      Indeed ...

      Seems the seeds sown by the systemd cancer/virus/knotweed have started to sprout evolved mutations.

      So ...

      Where's the fucking glyphosate when you really need it?


      1. CRConrad

        It's all that bitch...

        ... Rachel Carson's fault. Hope she dies of cancer or something!

        (Too soon?)

  10. Doctor Syntax Silver badge

    Straw man alert

    "You can still do it that way today, as in this example of how to install Node.js v8.1.1 to your Linux desktop." with a link to a build from GitHub repositories. The alternative is simply:

    apt install nodejs

    1. Anonymous Coward
      Anonymous Coward

      Re: Straw man alert

      Am I missing something? Doesn't that install a package which is something someone has put together, which is the point of the article - one for apt, one for yum, one for etc etc ... ?

  11. mpi Silver badge

    "Actually, the better question is: When will they replace most desktop Linux programs?"

    No idea when, or if that will happen for anyone else.

    I know when it will happen for me:

    When I see a good reason for an extra layer of abstraction/virtualization between the application and my OS. So far, I haven't seen one.

  12. robinsonb5

    I can see some value in shipping large, monolithic and especially commercial software in Snap or Flatpak format, but I think of it as something you hold your nose and tolerate, an ugly compromise that should be a painful reminder that the compatibility problem turned out to be too hard to solve - it certainly shouldn't be a long-term ideal for software distribution.

    Snap in particular offends my sense of elegance and minimalism, with its littering of loopback mounts, and inability-by-default to access files in the home directory. (Yes, I know there's some magic incantation to make that possible; whatever it is failed to work for me when I tried to use OpenSCAD recently - I eventually gave up and used the AppImage instead.)

  13. Martin Gregorie

    One thing you've missed

    Is that at least some of us like to keep our backups in a known state. This is easy to do if you run a backup followed immediately by a system update: in my case this means running rsync followed immediately by dnf (I'm running Fedora), then a system restart to make sure that the new libraries etc. are now in use. Handling backups this way ensures that, after a disaster such as a disk failure, you *KNOW* that simply restoring the most recent backup will leave you with a runnable system in a known state. This holds regardless of whether the backup+upgrade sequence is manually sequenced or run by a script.

    The problem with flatpak and friends is that there's apparently no way to avoid getting force-fed an update at a time chosen by the developer. Murphy pretty much guarantees that at some point an application push upgrade WILL coincide with a backup run, and that this will result in a backup that, if restored, will contain incompatible software modules and/or configuration files.

    AFAIK there is currently no mechanism provided that can prevent developer-originated push updates from running while the recipient system is taking a backup. I've certainly not seen any mention of such a backup integrity protection feature, so it seems unlikely to have been provided.

    Another cause of problems would seem to be the case where the push update replaces application configuration files that have been modified to suit local requirements, e.g. sshd security settings. At least dnf, apt etc tell you when this happens and you can edit the revised configuration as required before rebooting the system: do flatpak etc even tell you that a new set of configuration files has replaced your customised ones?

    1. vulture65537

      Re: One thing you've missed

      Disconnect from network then do your volume or filesystem snapshots. Reconnect to network to copy them to other storage.

    2. Anonymous Coward
      Anonymous Coward


      Temporarily blackhole the update servers so you can maintain the convenience of administering the process from your desk without having to plug in a keyboard.

    3. Graham Cobb Silver badge

      Re: One thing you've missed

      This is one reason for running a root filesystem that supports snapshots. Snapshot the system, then backup from that.

      1. Norman Nescio Silver badge

        Re: One thing you've missed

        Assuring data consistency in a snapshot is harder than it looks. It depends on your application programmers 'doing the right/write thing'.

        StackExchange: Is it overkill to shutdown a VM before taking an LVM snapshot for backup purposes?

        But what if an application or series of services is in the middle of an operation that consists of multiple independent transactions? For example, a user is registering. The database is updated but the registration e-mail has not been sent out yet or something like that. Taking a snapshot at this point would not reflect a correct/complete system state.

        Yes, programmers should assure consistency for a whole transaction (which can consist of many updates). In real life, that doesn't always happen. Snapshots are great, but they can't protect you from non-atomic transactions.
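        A minimal sketch of that failure mode, using SQLite and an invented schema: the "snapshot" (a plain file copy here, standing in for an LVM/filesystem snapshot) lands between two individually atomic transactions and captures a valid but incomplete state:

        ```python
        # Each transaction below is atomic, but the *business operation* spans two
        # of them: register the user, then queue the welcome mail. A snapshot in
        # between sees a consistent database that is nevertheless mid-operation.
        import os
        import shutil
        import sqlite3
        import tempfile

        path = os.path.join(tempfile.mkdtemp(), "app.db")
        db = sqlite3.connect(path)
        db.execute("CREATE TABLE users (name TEXT)")
        db.execute("CREATE TABLE outbox (mail TEXT)")

        with db:  # transaction 1: register the user -- committed
            db.execute("INSERT INTO users VALUES ('alice')")

        # The snapshot is taken HERE, between the two related transactions.
        snapshot = path + ".snap"
        shutil.copy(path, snapshot)

        with db:  # transaction 2: queue the welcome mail -- also committed
            db.execute("INSERT INTO outbox VALUES ('welcome alice')")

        restored = sqlite3.connect(snapshot)
        users_in_snapshot = restored.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        mails_in_snapshot = restored.execute("SELECT COUNT(*) FROM outbox").fetchone()[0]
        print(users_in_snapshot, mails_in_snapshot)  # 1 0 -- user exists, mail lost
        ```

        Restoring that snapshot yields a database every tool will pronounce healthy, with a registered user who will never get their e-mail; exactly the "correct but incomplete system state" the StackExchange quote describes.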

  14. bocan
    Thumb Down


    Pardon my language but there's a well known quote very appropriate for this question:

    "You there: F*ck off. And when you get there, f*ck off from there too. Then f*ck off some more. Keep f*cking off until you get back here. Then f*ck off again.”

    Redhat and Ubuntu can whip their dead horse "packs" and "snaps" till the end of time, but no self-respecting sysadmin's gonna allow those things onto an enterprise server. I've only ever seen it once on a corporate instance, and it was only allowed because no one would have to support it and no one cared if it stopped working. As for desktop, it feels largely irrelevant as Linux on the desktop has such low usage, but do people really want a system that doesn't work anything like a normal server? Half of the point of Linux on the desktop is to give you a place to develop on a like-for-like system.

    1. Anonymous Coward
      Anonymous Coward

      @bocan - Re: No.

      Wrong! Windows self-respecting sysadmins will do just that. :)

      1. ChoHag Silver badge

        Re: @bocan - No.

        > Windows self-respecting sysadmins

        Surely you jest.

      2. Steve Davies 3 Silver badge

        Re: @bocan - No.

        Errr... find me a Windows sysadmin that commands respect! Wot? There are none? (only joking)

        1. CRConrad

          Re: @bocan - No.

          They do, they do... But as he said, only from themselves.

      3. CRConrad

        Re: “Windows self-respecting sysadmins”

        Contradiction in terms?

    2. Anonymous Coward
      Anonymous Coward

      Since we are being profane, Fsck linux Desktops.

      All of this arm waving and jabbering and finger-pointing, over the least relevant of the desktop operating systems. Linux desktops are by and large a vanity project, a passionate hobby for some, but a fart in the wind as compared to what Unix servers are doing, and a raindrop falling in the ocean compared to windows and osx laptops and desktops. Chromebooks leave the underlying os largely invisible to the user, as do the tablet OSes.

      So if LibreOffice only ships as a Snap or Flatpak, very few people will care, because very few people will notice the performance hit from the virtualization overhead.

      Complications arrive when you start dealing with over-virtualization of packages that are part of the regular core operations on a server. More complications arise when that server image is itself virtualized.

      All of the Desktop centered design is side effect of the developer community succumbing to "It works on my Laptop" thinking. Plenty of the devs making commits aren't building or developing on or for professional grade systems. Sometimes, like for an email client or an open source media editor, that isn't really an issue. But as the bloated packages become the new industry standard, server ops are having to be more disciplined about anticipating the complex impacts these things can have, as "It works on my Laptop" programmers can't be blindly trusted not to make braking changes that will destroy large enterprise production environments without even realizing it.

      1. SloppyJesse

        Re: Since we are being profane, Fsck linux Desktops.

        > very few people will notice the performance hit from the virtualization overhead

        More likely, very few people will realise that the sluggish performance they experience is due to the way the app developers have shipped the product - they'll just blame the application itself. And then probably get told to go buy a bigger machine.

      2. Norman Nescio Silver badge

        Re: Since we are being profane, Fsck linux Desktops.

        ...programmers can't be blindly trusted not to make braking changes that will destroy large enterprise production environments without even realizing it.

        Presumably, such changes would be a drag on performance?

        I'll get my coat...

    3. Orv Silver badge

      Re: No.

      Redhat and Ubuntu can whip their dead horse "packs" and "snaps" till the end of time, but no self-respecting sysadmin's gonna allow those things onto a enterprise server.

      But they love docker containers, which are basically the same thing...

      1. Anonymous Coward
        Anonymous Coward

        Re: No.

        " But they love docker containers, which are basically the same thing... "

        Wait, "they" do?

        I imagine containers have their place and use, but there's no great love for them round our place....

      2. Russ T

        Re: No.

        I would argue that docker containers are specialised and built for the purpose of maximising virtualisation resources.

        These FlatSnaps sound more like virtualising something because it's the only way to standardise it, and they actually overuse resources unnecessarily.

  15. VoiceOfTruth Silver badge

    Linux is going down the bloat path

    And all because the whole Linux community cannot come up with one definitive way to package apps and libraries. So instead we get the worst option: the immense bloat of fully packaged apps + libraries. It sounds good but it isn't. It is bloat and lazy. Backups come to mind? If you can't do a bare metal recovery you do not have a backup.

    The Linux crowd, me included, used to laugh at the NT people whose backup/recovery process was reinstall the OS, reinstall all the apps, reinstall the data, because in those days handling open files on NT was a real PITA. If anyone tells me now that Linux recovery is reinstall the OS, download all the apps again like it's some kind of smartphone, put back the data then get lost. App bloat (due to containerisation) = backup bloat.

    I have run into this on Macs too. NamelessImageApp stores its thumbnails as well as all the metadata in an SQLite database. Very good when you have a few photos. But when your SQLite database is 2GB in size, if you so much as tag a photo, that means a 2GB backup. Now multiply that by the number of people who have the same NamelessImageApp. The idea of backups is lost on people who, it seems to me, do not do backups or have endless backup disk space.

    Gnome is bloated. Why? Perhaps because the people who code Gnome don't understand about using resources wisely. I have seen Yum update crash due to using too much memory on a small system. The solution is not 'get a bigger boat'.

    1. ChoHag Silver badge

      Re: Linux is going down the bloat path

      Packaging has never been the problem, even when we only had to decide between deb and rpm (not forgetting you slack, just no-one's ever cared). Packaging is a symptom caused by the real problem which can basically be summed up as CADT and the pathological inability of developers to Leave The Code Alone.

      Flatpaks and their ilk will just create Yet Another Standard.

    2. steelpillow Silver badge

      Re: Linux is going down the bloat path

      Hey, we agree on something. However, the latest bloat road is only there for those who want it. It's the main user community who are running along it like lemmings; we don't have to.

      Just like the roll-your-own distros, rpm, deb and their kind will still be here for those who want to package for their favoured distro.

    3. keithpeter Silver badge

      Re: Linux is going down the bloat path

      " whole Linux community cannot come up with one definitive way to package apps and libraries"


      Who is the 'linux community'?

      What organisation is there that could manage the standardisation process?

      How could standards be enforced?

      How would the thousands of different independent upstream projects ensure that their development cadences coincided to ensure compatibility with standardised libraries?

      Best of luck

      1. CRConrad

        Once upon a time...

        Quoth keithpeter:

        What organisation is there that could manage the standardisation process
        ...there was an organisation called The Linux Foundation.

        How could standards be enforced?
        This foundation created / sponsored the creation of a specification called the Linux Standard Base, LSB.

        How would the thousands of different independent upstream projects ensure that their development cadences coincided to ensure compatibility with standardised libraries?

        By following the LSB, perhaps?

  16. Mr D Spenser

    Monolithic or Dynamically Linked Applications?

    This is an age old argument that will never go away. Haven't looked but I am sure you can find a chart that attempts to show all the pros and cons of each philosophy. The comments so far have done a good job of pointing out the issues and in the end it boils down to personal preference. Start up time, memory usage, disk usage, stability? Plenty of hills to go stand on and defend.

  17. Alistair Silver badge

    Ouch. There are some misconceptions

    I have 4 flatpak apps that I use. Perhaps fortunately, 3 of them share the Nvidia driver library kits. All 4 needed updating for (I don't recall precisely) a glib or glibc change.

    a) when the kernel I'm running gets updated, this brings in new nvidia drivers at the hardware level, and thus the flatpak version needs updating. It pulls those drivers *ONCE*, that gets used by all three flatpaks.

    b) for the glib/glibc change, only one flatpak package was pulled and it was applied to all 4 flatpaks.

    Being sandboxed, in order to correctly *use* two of those flatpaks, I did have to add disk paths to the "allow" list. It does add some complexity to the deployment process, but it is certainly manageable. It's *not* something I'd hand to a youngster with less overall compute experience, but perhaps having dealt with VAX to mainframe data transfers, TCP over SNA, BNC (both) to Ethernet, and about 12,000 other weird and wonderful moves from type x to type y of computing functionality, I'm just not that intimidated.
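    For reference, that kind of allow-listing is normally done with `flatpak override --filesystem=...`; for a per-user install the result is a small keyfile along these lines (the app ID and path here are invented):

    ```ini
    # ~/.local/share/flatpak/overrides/com.example.MusicManager
    # equivalent to:
    #   flatpak override --user --filesystem=/mnt/nas/music com.example.MusicManager
    [Context]
    filesystems=/mnt/nas/music;
    ```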

    Yes, they chew up more disk space for similar code. But then my Gentoo VM uses 28% of the disk space that my Fedora VM uses with pretty much identical software stacks. The Gentoo instance takes about 150 *times* the build time of the Fedora one, but it gets there eventually. What I will note is that the flatpak versions of specific apps I'm using are far easier on my mental health than what would be involved in deploying them without flatpak. None of the 4 are ...... common Linux applications: 3 pertain to video games I play and one is for managing a (now) rather extensive library of music on our household NAS. (I've been digitizing my mother's 800+ LPs over the last few months.) All are only available directly from GitHub, and there are not currently RPMs for them in Fedora or RPM Fusion. Flatpak gives me the package in a format I can be *reasonably* assured is functional and mostly current.

    I do *not* think that in its current structure and form flatpak (at least) is *complete and ready for prime time*, but I do think that it has enormous potential to make far more non-distro-based software consistent and available than is currently out there. I would suggest that, within reason, it would be possible for *cough* certain large video-game-producing companies *cough* to package their product with a tweaked and tuned version of Wine, making it possible for them to reach an (admittedly currently small) niche market that stands a chance of growing.

  18. -Sx-

    Is it Secure ? Really secure?

    Having been working in the computer/IT industry for 40 years, I too have been with Linux and other open-source things since the beginning. I must say I respect your opinion... but face it: it's your opinion that Flatpaks/Snaps et al are more secure.

    In my opinion they haven't been around long enough to see or even realize the true impact on security... sure, we already see the impact that a few running simultaneously have on localized performance, but not on security.

    Back in the day I felt safer when I knew many other eyes were reviewing and seeing what people were contributing to open source. But not now. Now we are assuming developers are acting in the consumers best interest.

    The primary advantage of closed source is that you have a primary source to bitch and complain to when things go sideways... but not in open source. I for one am trying strongly to avoid Snaps and the like. I do not need to be bleeding edge; I just want crap to work securely and be consistent.

    I truly understand I am imagining a world where consistent security happens without a huge amount of customer involvement... but hey :) a guy can dream. ;)

  19. CGBS

    Yes, more layers of abstraction that make doing anything a PITA. But the isolation, what about that amazing thing? The thing that you usually have to turn off, or grant the normal permissions to, the second you install a program, because OBS needs to access a cam or something needs the GPU? And why not just install 2 more versions of the NVIDIA drivers; one always leads to such stability. No, no, you use a portal to access stuff, see? We want to introduce the intentionally obtuse and over-engineered way of Red Hat thinking, coupled with the Microsoftian over-control of Canonical, to everything. You will love it. Or else.

  20. Anonymous Coward
    Anonymous Coward

    Fighting over molehills

    "What I think is really going on is there's one group that favors Red Hat, while the other loves Ubuntu. In the meantime, I'd like to remind both sides that Windows users are snickering at them from the mountains as the Linux distro fans fight over molehills".

    This. 1000x This. Stop shooting each other in the foot.

  21. TiredNConfused80

    Since before some of you were born....

    ...downloaded in 1991....

    Oh crap I'm old aren't I....

  22. Eddie G

    Static or Dynamic

    Would be interested to hear why (some code at least) can't be distributed as big, old statically linked execs (no dependency hell)? I certainly do this with scientific code (that gets bounced around various machines).

    The arguments against concerning size/start up times don't seem to be an issue when Snap etc have similar issues. Updates require a full replacement, but (again) that's what the container systems seem to do anyway.

    1. Orv Silver badge

      Re: Static or Dynamic

      When I've tried that, I've found that many libraries are no longer designed to be compiled statically and don't link properly if you try.

      1. bazza Silver badge

        Re: Static or Dynamic

        That's slightly irksome. Also, how!?!

        I'd be grateful to be pointed at an example :-) Thank you!

        1. Richard 12 Silver badge

          Re: Static or Dynamic

          Private symbol (name) collisions, plus stripping unused code for the most part.

          Symbols that appear unused get removed when statically linked, which then means load-by-name doesn't work as the function or (more often) static data simply isn't there in the final binary.

          There's fairly simple workarounds for this, but it's easy to miss something.

          Then there's collisions between private symbols. When they're in a .so, the private symbols for A and B can't affect each other - the other didn't exist when linking, and the .so loader only attempts to dynamically link the public symbols.

          When built statically, the linker does know both are there. Sooner or later A and B use the same name and you're in multiple-definitions hell. There aren't any simple fixes - C++ namespaces and name mangling were invented to help developers fix this when it happens, but...

          1. bazza Silver badge

            Re: Static or Dynamic

            Ah, I see. Yes, that'd be a nuisance! Thanks!

            Takes me back to coding for VxWorks, where every symbol was machine-global, not just task-global. You'd have to make sure to add a module-unique prefix to every function and variable. Useful though: you could globally overload things like printf if you wanted. NASA use VxWorks and have relied on that to fix things in orbit!

  23. bazza Silver badge

    Snap, so far as I can see, slows down application startup and gives apps a funny and dissatisfying way of browsing filesystems. Also, the mount CLI program is now useless, because there are so many things mounted that aren't actual real storage devices.

    I'm struggling to see why this is better for the end user. I can see that it's better for the app developer...

    1. geekbrit

      This is what made me decide to move away from Ubuntu after fifteen years. Working with the filesystem just became clunky and irritating.

      It was bad enough having to move downloaded files from Firefox's own obscure little downloads folder, but that was just the start.

      I regularly have to combine pdf files into a pack for a website. I have a little automated script that uses pdftk. Suddenly, this is available only as a snap, and it can't even see the website files.

      What's next? Vim becomes a snap and you can only edit files in your home folder?

  24. Groo The Wanderer

    I live in the REAL world, and here, Flatpaks and Snaps do NOT cut it for doing production-level software development. You can't hook into them properly like you can a filesystem-based installation.

  25. DS999 Silver badge

    This is solving a problem that doesn't have to exist

    If the major Linux distros got together and agreed on some sort of standards, there would be a fixed target for application writers, and you wouldn't need to basically create one for yourself in the form of a stripped-down OS in a container.

    What's really dumb about this is that you need a separate container for different apps. At least the CONTAINERS could be standardized, so you only pay that overhead once!

  26. anderlan

    I deserve to be roasted for this,

    but tell me again why we don't just statically link everything if we want to widely distribute an app, and call it a day?

    1. that one in the corner Silver badge

      Re: I deserve to be roasted for this,

      No roasting from this direction.

      > why don't we just statically link everything

      Some people just seem to love complexity?

      If you have 147 shared libraries then you have a Grown Up program that Needs An Installer or, More Manly yet, you get to learn how to set up a Snap wotsit or a Flat thingie or shove it into Docker no matter how pointless that is for your Users' Use Cases. But your CV grows.

      Used to get this with Windows applications as well: I'll arrange everything to build statically, with as few DLLs as possible (sometimes Windows forces it on you - e.g. using multimedia timers) then someone comes in for a short time and poof! DLLs everywhere ("it saved so much time, didn't have to compile anything"), no idea if they are all still needed ("I just copied them all in, that was easier"), we don't have the debug symbols for them...

      Still had to create a new release just to distribute, e.g. an updated SSL library, whether that was statically linked or in a DLL ('cos you can't rely on the User updating "just this one DLL" and nor should you).

      Sigh. Time for Grumpy Bedtimes.

  27. Mr. Balise

    Even older guy disagrees ..

    I was raised on Unix V7 on a PDP-11/45, and now run Manjaro (Arch) on a 15-year-old homebuilt PC; it runs perfectly. systemd is bad enough - I won't be using any snap and flatpak things.

  28. georgezilla Silver badge

    " ... but at the end of the day, it's all about ... "

    Profit. Not just some. Not just a lot. But about huge profits. Obscene amounts. And fuck everything and everybody but us.

    It's about "it's not our" and everyone MUST USE OURS!!!

    And if you believe anything else, you're just fucking wrong.

  29. nvmd

    Goes to show that even in the Linux world there are people really far detached from reality. Snap and, to a somewhat lesser extent, Flatpak are a cancer.

  30. Matthew "The Worst Writer on the Internet" Saroff

    Not a Linux Person, nor a Sociologist, But………

    It seems to me that any discussion of Linux issues eventually ends up with people arguing over systemd.

    I don't get it, but then again, I don't have to deal with it.

    (OK, I read my email on using Bash with SSH, but that's not being a Linux person)

  31. Anonymous Coward
    Anonymous Coward

    It's not the principal of the thing ...

    ... it's balancing various metrics of convenience, nuisance, time, and space.

    More snaps require more memory and longer startup times, but less chance of conflicts and of having to fix those.

    If building a container, though, to replicate thousands of times, it would be ridiculous to include snap.

    1. CRConrad

      Re: It's not the principal of the thing ...

      ... It's the principle.

  32. mIVQU#~(p,

    I’ll stick with apt thanks.

  33. steelpillow Silver badge

    Horses for courses

    So we are just adding a new layer to the install ecology; first was build-install for every app and dependency, then install package-plus-dependencies, and now install containerized-in-one-lump. Each layer gets hungrier, but easier to play with.

    Where next?

    I have this great idea: don't just stop at one app plus its dependencies, because then you get hundreds of clones of the common dependencies. So package up a whole suite, all using the one set of dependencies. Containerize the whole thing with its OS. You could call it, I dunno... a distro?

    But seriously, this "containerization makes it more secure" line is just marketing BS from the coloured-pencil subconscious in all of us. How can a hundred OS clones be more secure than a single OS? As malware learns to penetrate containers (yes, it is happening), that's a hundred instances that need patching, not just one. Now that is what I call insecure; maybe not yet, but give it a couple of years, and remember you read it here first. Apps that need it, like web browsers, can tailor whatever sandboxing/containerization is appropriate to them. That is how it should be.

    Now, about that "building distro-specific packages is such a pain" argument. Go ask the Devuan maintainers how hard it is. 99% is automated and they just have to deal with a few loose ends. If you employ hordes of packager-uppers for each distro you support, you are still living in the Dark Ages. Enjoy the choice.

  34. Binraider Silver badge

    As I see it, the main problem is that the package managers and front-ends wrapped around Yum or RPM are also distributing Flatpak and Snap packages.

    Do I want Ardour, Ardour (Flatpak) or Ardour (Snap)?

    Now, for us it's not a big deal, but for normal users this is just yet another source of confuzzledness.

    Setup.exe has its own bunch of problems of course, but user confusion over what to click to install whatever is zero.

    An idealised Reality is somewhere between the two of course.

  35. Stephen Booth

    Containers vs static linking

    One could argue that this form of containerised packaging is frequently just a convoluted way of converting dynamically linked applications into statically linked ones. I'm sure there are some more complicated use cases out there, but a lot of the time the container is only there to provide the dynamic library environment the application was compiled for. In these cases statically linked binaries would be equally portable. However, it's easier to build a container than to rebuild all dependencies for static linking, and there is no good tooling at the moment for directly converting dynamic binaries to static ones.

    Containerisation is a good route for making a cross-distro release starting from a release built for a particular distro, but developers could also invest more in building static releases.

  36. Carlie J. Coats, Jr.

    Ulrich Drepper ("shared libraries considered harmful") and his ideologues did away with static executables, on the claim that shared libraries reduced resource use.

    And now these same people want each executable to run in its own (static!) entire virtual machine (with its own full set of libraries, etc.)!

    Why do I not trust this ??

    1. Binraider Silver badge

      To say nothing of the loss of auditability of the contents of packages when each one is, in Python-esque parlance, a separate environment (I won't refer to them as VMs, as they aren't examples of hardware virtualisation).

  37. Adrian 4

    making packages

    You suggest that making packages is a great deal of work for the package maintainer. And no doubt it is.

    But some part of that work still has to be done by the snap builder. And the cost of effectively translating a package build made for just snap, instead of for all the viable distributions, is paid on every startup by every user. That isn't a good tradeoff.

  38. Erik Beall

    Virtualization with benefits. Just not for the user

    Docker, snaps, and flatpaks have benefits, so we're told. They tend to be: security (particularly for Docker this is claimed), simplicity of deployment and maintenance, and robustness. The first is mostly false, the second is really only true for the sellers (most particularly of enterprise B2B run on cloud; no need to have anywhere near as many field support engineers or configuration developers if the deploy environment is identical), and the third is patently false. There are problems that need creative solutions, but this is just the usual land grab. It's always present and always will be. Keep pointing out that the emperor has no clothes and hopefully we can prevent snap becoming required in our work environments.

  39. Blackjack Silver badge

    Considering the odyssey I had to do to make the Video Editing App Shotcut work in Linux Mint? Not any time soon.

  40. stratofish


    Bloated

    I had to free up some space the other day and found that I had installed Obsidian (a Markdown text editor/organiser) as a snap instead of a deb. It took 1GB... for essentially a text editor...

    Now, part of the reason was that, because of a bug, it had installed multiple versions of the NVIDIA driver alongside it. I never found out why a text editor required graphics drivers either. I uninstalled it and downloaded the .deb from their website instead, at ~70MB. Still bloated, but a big improvement.

    As a developer, it is embarrassing, the complete lack of care or pride taken shovelling out this shit. It's like we are trying to meet some kind of inverse Moore's law by making everything slower and bigger the faster the systems get.

    1. Anonymous IV
      Thumb Up

      Re: Bloated

      > It's like we are trying to meet some kind of inverse Moore's law by making everything slower and bigger the faster the systems get.

      Welcome to the world of Windows...!

  41. Joe Cincotta

    Developers Developers Developers

    This is all true and correct, but the missing link is - especially on the desktop - a consistent and simple way of managing cross-container access. The tools developers use, like DB workbenches and other GUI thingies, all need access 'out of the sandbox', and I found this to still be inconsistent and cumbersome with the likes of Snap and Flatpak. Solve this and I think it would make the transition complete. You do end up with an interesting premise though. How much apt? How many things should distro maintainers support on apt? How much should they rely on, say, Snap? Where is the boundary?

  42. nijam Silver badge

    > The reason is application developers don't want the hassle of rewriting their code to work on multiple, mutable Linux distributions.

    That is very much a weak link in your reasoning.

    Maybe we've moved on from the bad old days when MS were accused of "encouraging" developers not to port their packages to Unix (or Linux). More likely commercial developers have faith in the circular reasoning that no-one writes for Linux because the market's too small, and the market's small because (fill in name of package) isn't available for Linux.

  43. Anonymous Coward
    Anonymous Coward

    I had avoided using snaps, to the extent of completely removing them and using apt to get Firefox on Ubuntu.

    However, a package I use on my server, tvheadend, isn't well supported on Debian, so I spun up the snap version. It wouldn't let me save onto my hard drive, so I had to create a symlink to my real hard drive. There were permissions issues. I went back to the apt version.

    The point is that none of this is well documented, and it forces you down the path of command-line fixes. This is hardly user friendly, and sure to have the opposite outcome to the rationale for snaps.

    I don't need to be locked out of my system resources by default, and when I am forced to use a system that I don't want, there should be tools to easily manage it in a user-friendly way. Snaps are a disaster in usability and will surely turn people away from adoption, as they have with this 10-year Linux user.

    1. CRConrad

      Boo hoo...

      ...forced you down the path of command line fixes. This is hardly user friendly
      The command line is how adults use computers. It's really not that hard, once you grow up.

  44. skataf

    Distros should come up with a standard.

    Distros should have a convention and agree on a common installation standard, without this snap and flatpak VM overhead. I mean, after all, this is Linux. It should have a common base.

  45. jeremya

    Why not go the whole hog and "flatpak" the entire O/S as well?

    None of this crappy waiting for hours while arcane scripts do their stuff to merge into the even more arcane systemd.

    Just have a big blob that is the O/S, and you don't do anything to it.

    Download. Run.


  46. unimaginative Bronze badge

    So, when I say it's time to wave bye-bye to using package managers such as apt or dnf and replace them with containerized package managers such as Appimage, Snap, or Flatpak, I do have a clue about what I'm talking about.

    So instead of packaging for, say, Debian and Red Hat, it gets packaged for AppImage, Snap and Flatpak?

    So, for example, the Ubuntu releases page currently lists over 30 different versions of Ubuntu which are currently in active support.

    Actually, there are only five. Point releases are not supported unless you upgrade - e.g. if you are running 14.04.1 you would need to have upgraded until it is identical to 14.04.6.

    The reason people want to use the older versions is that they want the extreme stability of running exactly the same software after a decade. Snap and Flatpak do not deliver that.

    Snap and Flatpak work for desktops, but few people want to run decade-old OSes on a desktop (not even those of us who will keep a computer running for more than a decade).

    I actually like the idea of running at least some applications in containers (better security), but my experience with Flatpak has not been great so far.

  47. fleamour

    You can't spell zypper!

  48. david1024

    Flatpak/snaps are a party we left 25 years ago

    The issue here is overhead and maintenance on the developer and admin/user sides. This was how it was done before, and it used a ton of disk space. Now I have to track the flat/snap versions and my OS. The reason for tools like rpm and apt was that monolithic applications weren't working... But here we are... wasting time and resources on an ultimately doomed, monolithic path with a new catchy name.

  49. fairwinds

    Nuke from orbit

    Thank you for this very informative article. A quick check shows that /usr/lib/snapd/snapd is consuming an eye-watering 1.6GB of virtual memory, even though I don't use snap. So I've just updated the Ansible script to add:

    - name: Remove snapd and other diseases
      package:
        name: snapd
        state: absent

    It's now running across the fleet.

    Now, if only I could figure out how to do the same for systemd...

    1. Anonymous Coward
      Anonymous Coward

      Re: Nuke from orbit

      ... if only I could figure out how to do the same for systemd ...

      It has already been figured out for you, back in 2014.

      Here you go.

      You're welcome. 8^D


  50. Norman Nescio Silver badge

    Are Snaps and Flatpaks optimal?

    Are the problems that are ostensibly solved (or at least, mitigated) by Snaps and Flatpaks best solved by using Snaps and Flatpaks?

    It strikes me that the volume of the debate shows that there are strong opinions on both sides. It's quite possible that both sides are wrong - so that 'traditional' package management has failings addressed by Snaps and Flatpaks; but Snaps and Flatpaks have their own limitations.

    Perhaps there is a better way? I've no idea what it might be, but if we are going to change something, it would be good to change to something that is better than both, rather than exchanging one set of problems for another. Quite what that would be, I have no idea*.


    *Well, I have some ideas, none of which should see the light of day in this forum. They'd probably be laughed at.

  51. arobertson1

    Snaps and Flatpaks are a bloated security nightmare

    Okay, first off, could somebody peel two different coloured stickers off a Rubik’s Cube, swap them, jumble it up, and give it to all the KDE users. That should keep them busy for a while. Next tell all the GNOME devs that Pop!_OS has themed the window title bars in a really cool way. That should keep them busy too. Right, now the silent majority can get a word in edgeways…

    Snap and Flatpak are the worst idea that has come to Linux in a long time. The idea of wrapping an operating system around an app and distributing it as a binary blob is stupid. It might work for the server market, but on desktop it's just slow and bloated.

    Take OBS Studio: on Fedora (RPM Fusion) the download size is a mere 7.6MB and the installation size is 25MB. The Flatpak is a whopping 198MB download and a 520MB installation size. So that's a staggering 26 times the RPM in download size, and an unbelievable 20 times the size on your hard disk! Really?

    In a world that's becoming more energy conscious, how can this be better for the environment if you are doing twenty times more disk reads just to load the application? What about all the e-waste as you throw out all the (now) junked computers? Wasn't Linux meant to support older computers? What about your poor SSD, now wearing out twenty times faster?

    As for security, what's going to happen when the next Heartbleed comes along and the TLS library embedded inside that Flatpak is vulnerable? Are the devs who created the software going to update it? Maybe, maybe not. There are no guarantees on this. Will Flathub or Canonical's Snap store remove the app due to the vulnerability? Will they leave it for a while until the dev updates the library that's vulnerable? Will they just ignore the problem completely?

    Then, there’s containment... If there’s anything I’ve learned in forty years of computing, it’s to accept the fact that nothing is 100% secure. Here is a quote from O’Reilly’s Java in a Nutshell: “Another layer of security protection is commonly referred to as the “sandbox” model: untrusted code is placed in a “sandbox,” where it can play safely, without doing any damage to the “real world,” or full Java environment.” Oh, the optimism of 1997! Also, what a load of crap! How many Java sandbox escapes have there been? How many virtualised hosts have been popped? How many AppArmor failures? And... how many “immutable operating system” failures will there be?

    You see, if you are going to run potentially vulnerable code in an “immutable operating system”, then you better make sure that it really is immutable. Unlike, say, these snapd CVEs from only four months ago as I write: CVE-2021-44731, CVE-2021-44730, CVE-2021-4120, CVE-2021-3155. You have everything from privilege escalation to snap confinement escape. Are you sure that app can’t escape its confinement?

    If you look at the current system with RPM or Deb, then the vulnerable library would get upgraded with the system updates. In a lot of cases the software that uses the library wouldn’t even know, and if it did break some software then the devs would have to update their own software otherwise it would be perpetually broken. Either way the vulnerability goes away.

    To be perfectly honest, I would rather run ChromeOS than Linux using containerisation - far more choice with far better support. Not looking forward to the absence of GPL apps, mandatory anti-virus software, endless permissions maze, and paid calculator apps in the Store though... Oh wait, that’s Linux hubstore in the future too!

    1. drankinatty

      Re: Snaps and Flatpaks are a bloated security nightmare

      As one of the thinking silent majority, a tip of the hat to you sir. You have fully captured the sentiment. All you need read in the article is the passage after "The goal?" to understand why there are a few so willing to kick sand in the face of what Linux stands for to push for containers.

  52. Mike_R

    Will Flatpak and Snap replace desktop Linux native apps?

    This is where we recall Betteridge's Law of Headlines.

    'nuff said.

  53. Anonymous Coward
    Anonymous Coward

    Lots of snaps are too bloaty

    If one has too many snaps, one can end up with an unbootable system.

  54. Anonymous Coward
    Anonymous Coward

    Garlic bread, it's the future…

    “I haven't found either to be that big of a deal. I run my Linux desktops on modern systems with powerful processors, 16GBs of RAM, and speedy SSDs”

    I’m delighted for you, but where I work most users have *at least* three year old, budget-friendly i5s, 8GB of RAM, and - ok - SSDs, but fairly slow SATA ones. On these systems, the performance hit of, say, the Firefox snap is big enough to make it worth installing the deb, even before you run into the pain of getting Kerberos to work. The argument that containerisation improves security always seems to neglect the fact it also makes it infinitely harder for essential processes to communicate with one another, resulting in a terrible user experience that we’re all supposed to be grateful for because devs no longer have to spend so much time compiling.

    I’m sure it is the future, but given that not everyone is running current-gen hardware, it’d be nice if the present could continue until the low end market has caught up enough to join in.

  55. Anonymous Coward
    Anonymous Coward

    Snap ignores home directory from getpw(), so it is useless

    Snap cannot handle home directories that are not in /home. There are kludge-arounds, but one might as well replace all the snap applications with "real" applications that follow conventions.
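
    The mismatch is easy to demonstrate; a rough sketch (the snap path shown is the conventional per-user data location for strictly confined snaps, with placeholder app/revision names):

```shell
# Where the system says your home is, via the passwd database (the getpw() answer):
user=$(id -un)
real_home=$(getent passwd "$user" | cut -d: -f6)
echo "passwd says:  $real_home"

# Where strictly confined snaps put per-user data, regardless of the above:
echo "snap assumes: /home/$user/snap/<app>/<revision>"
```

    If `real_home` isn't under /home, the two disagree, and the snap misbehaves.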

  56. drankinatty

    Are you out of your ever loving bleeping mind?

    ""I run my Linux desktops on modern systems with powerful processors, 16GBs of RAM, and speedy SSDs. Frankly, neither performance nor a lack of RAM has been an issue for me." -- well goodie for you (horse clap...)

    What of the many who have finally got the parts for a dual-core Athlon, 2GB of RAM and a 500GB 5400 RPM drive? I guess your view for a vast many of "those people" is (...it sucks to be you...)

    That is the point. Unless you have the latest greatest system (SSD absolutely required) because you "must load not just the application but the containerized operating system" including "all its necessary libraries and associated files" -- from the container, on top of a containerized OS on top of the OS you are already running... (real user-benefit there) Not to mention the multiple versions of "all its necessary libraries and associated files" that will be duplicated, many times over, in containers for apps built using common libraries or toolkits.

    Where we are in complete agreement is "All containerized apps run slower than their native counterparts." (full-stop)

    Let's just dumb-down Linux (the same way the KDE and Gnome devs have done their respective desktops) to the point it offers no real benefit over the Redmond offering. Let's admit devs and distros are just too dumb to implement FHS fully and throw the baby out with the bathwater.

    What's really going on here and who are you? Some spokesperson from a fledgling containerized software consortium? Why would anyone so blatantly throw sand in the face of the Linux ethos so completely?

    Ah.., the quiet part out-loud... " The goal? To build ["a vendor-neutral commercial and technical ecosystem to publish and distribute end-user applications"] for Linux PCs." (bracketing of internal quoted passage added). So somebody has finally cooked up a scheme to monetize app development where the end-user pays and you are the enthusiastic spokesman for that? Sure sounds like Redmond isn't looking on from a mountain-top, but is right there in the burrow bending the mole over...

    No thank you.

  57. Adam Inistrator

    The LibreOffice snap is a slug on Ubuntu for some reason, both to start and to open large or password-protected documents, so I just went back to Ubuntu's standard deb despite it not being the latest version. I don't know if it is fast enough for other people in general, but I hadn't time to invest.

    Many distros are mulling skipping LibreOffice in standard installations nowadays, since people are using web/cloud solutions like Office 365 more and more, so the need for locally installed monsters is reducing over time.
