Malicious SSH backdoor sneaks into xz, Linux world's data compression library

Red Hat on Friday warned that a malicious backdoor found in the widely used data compression software library xz may be present in instances of Fedora Linux 40 and the Fedora Rawhide developer distribution. The IT giant said the malicious code, which appears to provide remote backdoor access via OpenSSH and systemd at least, …

  1. yogidude

    CVSS 10 eh?

    Lucky it wasn’t in the kernel then.

    1. Jamie Jones Silver badge
  2. Will Godfrey Silver badge
    Facepalm

    SytemD?

    What on earth is the connection between that and xz?

    1. Mike007 Silver badge

      Re: SytemD?

      I suspect systemd is used to reconfigure the ssh service in some way. If you simply edit the config file someone might notice...

      Getting something like this in to every server that installs an OS update without anyone noticing is the sort of thing that earns you a sophisticated attacker badge.

      1. Blazde Silver badge

        Re: SytemD?

        Sophisticated Defender Badge and a fierce hat tip to the one who found it (appears to be Andres Freund)

        Although his method of initially spotting it is unconventional (from https://www.openwall.com/lists/oss-security/2024/03/29/4 ):

        After observing a few odd symptoms around liblzma (part of the xz package) on Debian sid installations over the last weeks (logins with ssh taking a lot of CPU, valgrind errors) I figured out the answer: The upstream xz repository and the xz tarballs have been backdoored.

        If the attacker was only a little more careful/competent..

        1. Doctor Syntax Silver badge

          Re: SytemD?

          If the attacker was only a little more careful/competent

          Don't knock the developer. You know how it is:

          "Ship it"

          "But comrade, I need another week or so to fix performance problems."

          "Never mind that. It runs so ship it."

    2. David 132 Silver badge
      Thumb Up

      Re: SytemD?

      If you have to ask that, you really haven't been paying attention to the inexorable feature creep of systemd :)

    3. Fazal Majid

      Re: SytemD?

      systemd is linked against liblzma, which has the backdoor. OpenSSH has nothing to do with systemd but many distros tamper with it to interface with systemd, thereby introducing this vulnerability.

      1. ldo

        Re: thereby introducing this vulnerability.

        The vulnerability was in liblzma, not systemd. On my Debian system, about 180 other packages depend on liblzma. So all of them had this vulnerability. And the same library is also used on other, non-Linux, non-systemd, platforms, too.

        If it wasn’t for the systemd→SSH connection, the vulnerability would still have gone unnoticed.

        1. ChoHag Silver badge

          Re: thereby introducing this vulnerability.

          systemd is (a symptom of) the brain virus that makes people think it's a good idea to plug things into ssh.

          If it wasn't for the systemd/ssh connection, this vulnerability would probably not have been made.

          1. ldo

            Re: a symptom of) the brain virus

            The vulnerability was in liblzma. On Debian alone, that is directly used by about 180 other packages. All of them were backdoored. The only reason the backdoor came to light was because it caused SSH to misbehave. If it weren’t for that being noticed by one sharp-eyed person, the vulnerability would still be there right now.

            1. Tom7

              Re: a symptom of) the brain virus

              Technically true, but the backdoor was specifically aimed at sshd - it only triggers if `argv[0]` is `/usr/bin/sshd`, for a start. The purpose appears to be to short-circuit certificate verification if the certificate fingerprint matches a known value.

              So the backdoor is invoked on anything that uses liblzma (and I think that's actually a lot more than the packages that declare a dependency on it) but it's not quite accurate to say they are all backdoored - they don't all open a back door into your system.

    4. Jamie Jones Silver badge

      Re: SytemD?

      From: https://www.openwall.com/lists/oss-security/2024/03/29/4

      "openssh does not directly use liblzma. However debian and several other

      distributions patch openssh to support systemd notification, and libsystemd

      does depend on lzma."

    5. ldo

      Re: connection between that and xz?

      It’s a compression library. Lots of things use compression libraries.

    6. R Soul Silver badge

      Re: SytemD?

      Surely you know systemd is the metastasising cancer that gets its poisonous tentacles into *everything* on a Linux system? It's the perfect vector for introducing malware and vulnerabilities - as this latest incident proves.

      1. ldo

        Re: systemd is the metastasising cancer

        The vulnerability was in a compression library directly affecting SSH, and only indirectly affecting systemd.

        Are systemd-haters like the anti-fluoridationists of the open-source world? Discuss.

        1. R Soul Silver badge

          Re: systemd is the metastasising cancer

          The vulnerability was in a compression library directly affecting SSH, and only indirectly affecting systemd.

          It was only indirectly affecting in the same sense that novichok indirectly affects the elimination of many of Putin's critics.

          systemd was responsible for injecting the vulnerability into the SSH daemon. SSH doesn't use liblzma. systemd does. The toxic liblzma posed little to no risk until the systemd cancer got involved.

          Are systemd-haters like the anti-fluoridationists of the open-source world?

          No. They are the voices of sanity who speak truth to the delusional anti-vaxxer nutjobs of the open source world.

          systemd is totally fucked up for two main reasons. [1] It tries to do far too many things and does all of them badly. Unlike init (the original PID 1) which only did two things and did them properly. [2] systemd interferes in the operation, configuration and maintenance of too many system services and components. This creates a rat-hole of spaghetti dependencies that almost nobody can hope to understand or debug. For instance, WTF does an init tool have to have any sort of dependency on any shared compression library?

          1. ldo

            Re: systemd was responsible for injecting the vulnerability into the SSH daemon

            No it wasn’t. The vulnerability also affected other platforms, like macOS, that don’t even use systemd.

            1. Anonymous Coward
              Anonymous Coward

              Re: systemd was responsible for injecting the vulnerability into the SSH daemon

              Only in your imagination.

              launchd, Apple's equivalent of systemd, doesn't have a dependency on liblzma. So any malicious code in that library can't infect the init system on macOS. Even if it could do that in principle, Macs have two more defences which stop that from happening.

              Apple-supplied executables are signed. That means if they get modified, the signatures won't validate unless someone can compromise the root CA that ultimately signed those executables.

              Second, macOS has System Integrity Protection enabled by default. Which means it's impossible to modify executables like sshd without rebooting and disabling SIP in single user mode. This is why updating macOS is a pain in the arse.

              The original vulnerability was/is enabled by systemd. Just take an objective look at the facts. systemd trusted a dodgy shared library that should never have been anywhere near an init system. And since systemd's services generally run as root by default, systemd was able to spread that dodgy library's malware into just about anything - fortunately in this case, just sshd. This is a direct consequence of systemd's fundamentally flawed design and implementation.

              BTW, Red Hat's CVE is crystal-clear about this vulnerability: "While OpenSSH is not directly linked to the liblzma library, it does communicate with systemd in such a way that exposes it to the malware due to systemd linking to liblzma."

              1. ldo

                Re: launchd, Apple's equivalent of systemd, doesn't have a dependency on liblzma

                Apple’s code doesn’t use it, but there is Open Source software available for the Mac that does.

                1. This post has been deleted by its author

              2. ChoHag Silver badge

                It's never just

                Even less so when it's ssh.

                "Just" sshd is "just" laying on the railway tracks. "Just" sshd is "just" pressing the sharpened blade gently against your jugular in a bustling crowd.

                "Just" sshd is short for "entirely" fucked.

                There is nothing fortunate about it.

                (Unless you like the schadenfreude of watching systemd shit the bed, which is hilarious. From all of us who predicted this: har har!)

                1. Anonymous Coward
                  Anonymous Coward

                  Re: It's never just

                  Indeed. Anything which compromises sshd means things are entirely fucked - no question. But the vulnerability could have been much, much worse. The malware only went after ssh. Imagine the carnage if it got systemd to #also# corrupt the kernel, the C compiler, sudo, emacs, critical certs, etc or bury itself deep in lots of system libraries.

                  Watching systemd shit the bed is of course a very entertaining spectator sport. It's sure to catch on and run for a very long time. Poettering's almighty turd is the gift that keeps on giving. Maybe schadenfreude is the dish that's best served cold, not revenge.

                  1. ChoHag Silver badge

                    Re: It's never just

                    You think they aren't waiting in the wings? This would be the access point. Successfully attacking ssh gives you all of those things.

                    *Entirely* fucked.

                    *That* is why the "stuff systemd into everything" crowd is so entertaining right now. Systemd is not just "paint your root password on a billboard", it's "turn everything it touches into a potential back, side or front door" that nobody on the inside even knows could exist while everybody outside is constantly looking for them.

                    1. This post has been deleted by its author

                    2. Anonymous Coward
                      Anonymous Coward

                      Re: It's never just

                      Quite so.

                      At this point and seeing how things have been going with respect to Poettering's systemd experiment, I think we can safely conclude that anything systemd (critical/non-critical/tangentially related/whatever) is very dangerous.

                      And that it should be avoided the same way you would avoid contact with the ebola virus.

                      What I fail to understand is how this is not 'common knowledge' by now.

                      But no.

                      Developers/distributions insist with this systemd shit.

                      So here we are.

                      WTF?

                      .

                      1. Anonymous Coward
                        Anonymous Coward

                        Re: It's never just

                        IBM now owns RedHat. IBM makes more money from support than from hardware. Fixing shit that was designed for a prima donna's laptop but runs on nearly every server on the planet...explains why IBM bought RedHat.

                        Now why Microsoft hired LP, I'll never know, and never be able to express without being guilty of libel.

              3. Richard 12 Silver badge
                Unhappy

                Re: systemd was responsible for injecting the vulnerability into the SSH daemon

                Signing doesn't help against this kind of supply-chain attack.

                If the source code or toolchain is poisoned, then the poison will get signed.

                Signing only (theoretically) helps detect unauthorised modification after signing.

                1. Tom7

                  Re: systemd was responsible for injecting the vulnerability into the SSH daemon

                  There was a potential opportunity for signing to help here - although the payload that got built into the library was in git, the M4 macro that deployed it into the build was not and was added to the release tarball presumably to circumvent review and delay detection.

                  So there are three ways this could have been avoided: use the code from git with a release tag, or use the tarballs generated from git on the fly by GitHub, or compare checksums between the tarball and git.

                  1. Anonymous Coward
                    Anonymous Coward

                    Re: systemd was responsible for injecting the vulnerability into the SSH daemon

                    You overlooked a fourth way. Never, ever run systemd.

              4. chroot

                Re: systemd was responsible for injecting the vulnerability into the SSH daemon

                Neither of the protections you mention helps against source code injection.

            2. Doctor Syntax Silver badge

              Re: systemd was responsible for injecting the vulnerability into the SSH daemon

              As far as I can follow the discussion elsewhere, the malware is introduced by a build-time script in the source tarball and, subject to various constraints, might or might not be incorporated in the actual built library binary. It's crafted to attack SSH only, but even that, again if I've followed the discussion correctly, still depends on systemd's use of the SSH server. So while macOS might or might not have the relevant code in its built version, it's unlikely to become a problem without systemd - unless launchd works in the same way in this respect.

              At least some of the discussion hinges on this appearing to have been what used to be called a long firm fraud, with a state actor as the likeliest perpetrator, so the question arises as to how long it has been going on and what else they might have got into, possibly using other handles.

            3. Anonymous Coward
              Anonymous Coward

              Re: systemd was responsible for injecting the vulnerability into the SSH daemon

              Your enthusiasm is commendable, but from your posts in this thread, and others, you should really heed the advice of your elders.

              Hopefully you'll grow out of your insane confidence whilst still being wrong, or you'll never learn.

              The dogma is blinding you.

              I mean this in a good way. Take care.

          2. Lipdorn

            Re: systemd is the metastasising cancer

            "For instance, WTF does an init tool have to have any sort of dependency on any shared compression library?"

            Compression of old logs?

            1. jake Silver badge

              Re: systemd is the metastasising cancer

              "Compression of old logs?"

              That's a maintenance issue, not an init issue.

    7. Doctor Syntax Silver badge

      Re: SytemD?

      "What on earth is the connection between that and xz?"

      You could ask the same about almost anything, not just xz.

      Current Devuan, and hence current Debian (give or take any infection Debian might have acquired from systemd), is on xz 5.4, so they're clear.

    8. chroot

      Re: SytemD?

      From the Archlinux website:

      Regarding sshd authentication bypass/code execution

      From the upstream report (one):

      openssh does not directly use liblzma. However debian and several other distributions patch openssh to support systemd notification, and libsystemd does depend on lzma.

      Arch does not directly link openssh to liblzma, and thus this attack vector is not possible. You can confirm this by issuing the following command:

      ldd "$(command …

  3. ldo

    It Was In Debian Unstable

    Learned about the vuln this morning. Checked my system, I had version “5.6.0-0.2” of xz-utils installed. Checked for an update, there was already one. The version I now have installed called itself “5.6.1+really5.4.5-1”. That’s Debian for you ...

    1. RedGreen925 Bronze badge

      Re: It Was In Debian Unstable

      "Learned about the vuln this morning. Checked my system, I had version “5.6.0-0.2” of xz-utils installed. Checked for an update, there was already one. The version I now have installed called itself “5.6.1+really5.4.5-1”. That’s Debian for you ..."

      Same here, though this article was the only one of all the supposed "news" articles to give any actual information of use, such as naming the library to check for and the versions affected - unlike the standard for journalism these days: fluff articles that pretend to give information of use. Oh, and the Kubuntu 24.04 development branch, which updated over 1100 packages today on my machine, had liblzma5 at version "Installed: 5.4.5-0.3". On the plus side it looks like the transition to the 64-bit time version fix has completed, going by all the t64 suffixes on the names of the libraries installed in the update.

    2. NATTtrash
      Boffin

      Re: It Was In Debian Unstable --- *buntu LTS

      "Learned about the vuln this morning. Checked my system, I had version “5.6.0-0.2” of xz-utils installed."

      Looks like the LTS' of this world might have escaped this? My *buntu 22.04 LTS seems to prove that a bit of constipation might actually not be a bad thing...

      nat@practice241:~$ dpkg -l *xz*
      +++-==============-===========================-============-======================================
      ii pxz 4.999.99~beta5+gitfcfea93-2 amd64 parallel LZMA compressor using liblzma
      un xz-lzma <none> <none> (no description available)
      ii xz-utils 5.2.5-2ubuntu1 amd64 XZ-format compression utilities

      nat@practice241:~$ dpkg -l *liblzma*
      +++-==============-==============-============-=================================
      un liblzma2 <none> <none> (no description available)
      ii liblzma5:amd64 5.2.5-2ubuntu1 amd64 XZ-format compression library
      ii liblzma5:i386 5.2.5-2ubuntu1 i386 XZ-format compression library

      Then again, being old and grumpy does force me to mumble "I told you so" on the systemd philosophy and creep...

      Suppose Ken and Dennis were on about something that was lost in (German) translation...

      1. Richard 12 Silver badge

        Re: It Was In Debian Unstable --- *buntu LTS

        That is the point of LTS, Stable and Testing.

        This got caught in Testing, so the second line of defence held. The system does appear to work.

        1. NATTtrash

          Re: It Was In Debian Unstable --- *buntu LTS

          The missus was testing the upcoming 24.04. That also seems to be spared, since it is (now) on 5.4.5-0.3

          1. WonkoTheSane
            Linux

            Re: It Was In Debian Unstable --- *buntu LTS

            Indeed.

            Having just checked my own install, it's currently sitting on around 270 updates that can't be installed, while the devs rebuild all the packages that might've been contaminated by this backdoor.

        2. Glen Turner 666

          Re: It Was In Debian Unstable --- *buntu LTS

          The "system does appear to work" is too much. It was the merest fluke this issue was found, relying on the availability of one Microsoft employee to do their micro benchmarking of Postgres performance.

          The system certainly did not work in getting the xz maintainer the help they needed - this after the measures taken for the same issue in OpenSSL. So that part of "the system" also failed.

          There are a lot of micro-failures. Hand-waving commit messages. Procedural code in build systems. An integrity gap between a git repo and a tarball. Binary test files.

          That's without the whole issue of how to monitor the behaviour of developers. This wasn't the first subversion they had tried.

          There is a lot to take away from this incident. That things can keep working as they are is not one of them.

      2. Doctor Syntax Silver badge

        Re: It Was In Debian Unstable --- *buntu LTS

        Ken and Dennis have/had (AFAIK Ken is still with us) many good things to say. What a pity so many didn't listen.

    3. William Towle
      Happy

      Re: It Was In Debian Unstable

      > The version I now have installed called itself “5.6.1+really5.4.5-1”. That’s Debian for you ...

      Nod - I've just been reading about building packages* and found the explanation for that under the Special Version Conventions section of the Debian Policy manual.

      (* I have an OEM system whose apt repository [and sadly also sources] has gone away, and when it wouldn't just boot in QEMU I wanted to transfer the unique binaries off with as much package dependency information as could also be recovered. This turned out to be straightforward, even when both packages and their dependencies were newer (and/or modified) compared to my target system - I found 'apt' usefully informative when it couldn't fill in the gaps with stock versions)

  4. elsergiovolador Silver badge

    Hint

    xz

    1. m4r35n357 Silver badge

      Re: Hint

      Not quite so pithy in the UK . . . assuming I got the "joke".

  5. Anonymous Coward
    Anonymous Coward

    Back door?

    More like a panoramic, err, window!

    1. Anonymous Coward
      Anonymous Coward

      Re: Back door?

      ... like a panoramic, err, window!

      I see what you did there ...

      .

  6. ldo

    More Details

    Here’s my impression of the situation, based on looking at my own Debian Unstable system.

    I can see two binary packages built from the xz-utils source code: xz-utils and liblzma5. The vulnerability found its way into the latter. My list shows about 180 different packages that depend on liblzma5, including 3 that are parts of systemd. The OpenSSH server package in turn depends on one of those systemd ones, and this is how the vulnerability was found, due to its causing odd behaviour in the SSH server. If it weren’t for that, nobody would have (yet) noticed the back door.

    1. bazza Silver badge

      Re: More Details

      Doesn't this indicate that there's probably a crisis in security at the moment? It's almost inconceivable that this is the first ever attempt at dependency poisoning. How many others have been perpetrated unnoticed?

      The way in which the Linux world is divided into myriad different projects doesn't help. Some projects are claiming to be the best thing for system security since the invention of sliced bread (cough systemD cough). But they may also pass the buck on the security of their dependencies whilst mandating the use of minimum version numbers of those dependencies. Did they vet those versions carefully as part of their claim to bring security to systems?

      The Linux and OSS environment is ripe for more patient attackers to get a foothold on all systems.

      Build Systems Are Not Helping, and Developers Have Been Hypocritical

      The build systems these days seem to be a major part of the problem. The whole autotools / M4 macros build system is hideously awful, and that seems to have played a big part in aiding obfuscation in this case. There is enthusiasm for cmake, yet that too seems littered with a lot of complexity.

      Clearly something is very, very wrong when tools like Visual Studio Code consider it necessary to warn that merely opening a subdirectory and doing "normal" build things can potentially compromise the security of your system. It really shouldn't be like that.

      One always needs some sort of "program" to convert a collection of source code into an executable, and in principle that program is always a potential threat. However, the development world has totally and utterly ignored the lessons learned by other purveyors of execution environments, despite having often been critical of them. JavaScript engine developers have had to work very hard to prevent escapes to arbitrary code execution. Adobe Reader was famously and repeatedly breached until they got some proper sandbox tech. Flash Player was a catastrophic execution environment to the end. And so on. Yet the way that OSS build systems work these days basically invites, nay, demands arbitrary code execution as part of the software build process.

      Unless build systems retreat towards being nothing other than a list of source code files to compile and link in an exactly specified and independently obtainable IDE / build environment, attacks on developers / the development process are going to succeed more and more. These attacks are clearly aided by the division of responsibility between multiple project teams.

      Secure systems start, and can end, with secure development, and no one seems to be attending to that at the moment. Rather, the opposite.

      How About This For An Idea?

      One very obvious thing about how OSS source is distributed and built is that projects conflate "development build" with "distribution build".

      When developing code, it's generally convenient to break it up into separate files, to use various other tools to generate / process source code (things like protoc, or the C preprocessor). Building that code involves a lengthy script relying on a variety of tools to process all those files. Anyway, after much pizza and late nights, the developer(s) generously upload their entire code base to some repo for the enjoyment / benefit of others.

      And what that looks like is simply their collection of source files and build scripts, some of which no doubt call out to other repos of other stuff or include submodules. So what you get as a distributee is a large collection of files, plus scripts that you have to review or trust, and that you have got to run to reproduce the executable on your system.

      <u>Single File</u>

      However, in principle, there is absolutely no fundamental reason why a distributee needs to get the same collection of files and scripts as the developer was using during development. If all they're going to do is build and run it, none of that structure / scriptage is of any use to the distributee. It's very commonly a pain in the rear end to deal with.

      Instead, distribution could be of a single file. For example, any C project can be processed down to a single source code file devoid even of preprocessor statements. Building a project from that certainly doesn't need a script, you'd just run gcc (or whatever) against it. You'd also need to install any library dependencies, but that's not hard (it's just a list).

      In short, the distributee could fetch code and build it knowing that they only have to trust the developer when they run the code (assuming the lack of an exploit in gcc...). And if you are the kind of distributee that is going to review code before trusting it, you don't have to reverse engineer a myriad of complex build scripts and work out what they're actually going to do on your particular system.

      If you want to do your own development on the code, fine, go get the whole file tree as we currently do.

      <u>How?</u>

      Achieving this could be quite simple. A project often also releases binaries of their code, perhaps even for different systems. It'd not be so hard to release the intermediate, fully pre-processed and generated source as a single file too. It'd be a piece of cake for your average CI/CD system to output a single source file for many different systems, certainly so if those systems were broadly similar (e.g. Linux distros).

      <u>Benefits</u>

      Developers could use whatever build systems they wanted, and all their distributees would need is gcc (or language / platform relevant equivalent) and the single source file right for their system.

      It also strikes me that getting rid of that build complexity would make it more likely that distributees would review what's changed between versions, if there's just a single file to look at and no build system to comprehend. Most changes in software are modest, incremental, without major structural changes, and a tool like meld or Beyond Compare would make it easy to spot what has actually been changed. It'd probably also help code review within a development project.

      I suspect that the substitutions made in this attack would have stood out like a sore thumb, with this distribution approach. Indeed, if a version change was supposed to be minor but the structure / content of the merged source code file had radically changed, one might get suspicious...

      1. abend0c4 Silver badge

        Re: More Details

        There does seem to be a lot going on of potential concern.

        Firstly, this trojan wasn't in the public repository, but in an 'upstream tarball'. These tarballs apparently exist separately because they can't automatically be generated from the repository. It sounds like a loophole that a package that is materially different from its public version should be accepted automatically.

        Secondly, the dynamic loader seems to offer all sorts of opportunity for mischief, allowing libraries to almost arbitrarily patch code in other modules merely as a result of being linked. This may be convenient for all sorts of reasons, but it doesn't seem ideal in a world where full trust cannot be assumed.

        Thirdly, in consequence, there would seem to be merit in having discrete, minimalistic libraries for widely-used functions (such as communicating with systemd activation) to reduce the potential attack surface at the cost of some proliferation in libraries.

        However, what it all comes down to is that we're still basically using a model of computing originating in the 1960s whose principal concern was to separate one user's code from the system and possibly a few others. We now have thousands of people's code running on our computers, mostly unsegregated and often without any active verification. Yet when anything goes wrong, we're still often simply blaming the victims or deflecting onto tribal wars over systemd or "Windoze".
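
        To make the dynamic-loader point above concrete: GNU IFUNC resolvers are one documented mechanism by which a library gets to run its own code while the loader is still binding symbols, before main() ever starts. The sketch below is purely illustrative - it compiles with GCC on ELF/glibc targets, the names copyish, resolve_copy and impl_a are invented for the example, and the real attack machinery was considerably more involved than this:

        #include <stdio.h>
        #include <string.h>

        static void *impl_a(void *d, const void *s, size_t n) { return memcpy(d, s, n); }

        /* The dynamic loader calls this resolver while binding the copyish symbol,
           i.e. library-chosen code runs before the program proper has started. */
        static void *(*resolve_copy(void))(void *, const void *, size_t)
        {
            return impl_a;
        }

        void *copyish(void *, const void *, size_t) __attribute__((ifunc("resolve_copy")));

        int main(void)
        {
            char buf[6];
            copyish(buf, "hello", 6);
            puts(buf);
            return 0;
        }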

        1. Doctor Syntax Silver badge

          Re: More Details

          "minimalistic libraries for widely-used functions (such as communicating with systemd activation)"

          Mimimalistic ... systemd? Does not compute.

          1. abend0c4 Silver badge

            Re: More Details

            QED

          2. ldo

            Re: Mimimalistic ... systemd?

            The notification protocol being used in this case is sufficiently minimalistic that it could be implemented in a little bit of C code, without having to link in the whole of libsystemd.
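
            For the curious, here is roughly what such a reimplementation could look like, going by the documented sd_notify(3) protocol: send the datagram READY=1 to the AF_UNIX socket named in $NOTIFY_SOCKET, with a leading @ marking an abstract socket. This is a sketch of the idea rather than the code any particular project ships, and error handling is kept minimal:

            #include <stddef.h>
            #include <stdlib.h>
            #include <string.h>
            #include <sys/socket.h>
            #include <sys/types.h>
            #include <sys/un.h>
            #include <unistd.h>

            /* Tell a notify-aware service manager that initialization has finished.
               Returns 0 on success or when no manager is listening, -1 on error. */
            static int notify_ready(void)
            {
                const char *path = getenv("NOTIFY_SOCKET");
                if (path == NULL || path[0] == '\0')
                    return 0;                    /* not started by a notify-aware manager */

                struct sockaddr_un addr = { .sun_family = AF_UNIX };
                size_t len = strlen(path);
                if (len >= sizeof addr.sun_path)
                    return -1;
                memcpy(addr.sun_path, path, len);
                if (addr.sun_path[0] == '@')     /* abstract-namespace socket */
                    addr.sun_path[0] = '\0';

                int fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC, 0);
                if (fd < 0)
                    return -1;

                static const char msg[] = "READY=1";
                ssize_t sent = sendto(fd, msg, sizeof msg - 1, 0,
                                      (struct sockaddr *)&addr,
                                      (socklen_t)(offsetof(struct sockaddr_un, sun_path) + len));
                close(fd);
                return sent < 0 ? -1 : 0;
            }

            A daemon would call notify_ready() once its own initialization is complete; when NOTIFY_SOCKET is not set, it is a no-op.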

            1. Anonymous Coward
              Anonymous Coward

              Re: Mimimalistic ... systemd?

              But it wasn't.

              And that's one of the key failures. The inconvenient truth is somebody decided it was a good idea to depend on a heap of unnecessary (unmaintainable?) bloatware which opened up a vast, undocumented attack surface instead of writing a few lines of presumably self-contained C code.

      2. yetanotheraoc Silver badge

        Re: More Details

        "in principal, there is absolutely no fundamental reason why a distributee needs to get the same colleciton of files and scripts as the developer was using during development"

        The Free Software Foundation would like a word.

        The source you are required to provide must provably result in the same binary, so your fancy distribution version ends up just being something else needing verification.

        1. bazza Silver badge

          Re: More Details

          The FSF? Get lost. I think you'll find that the developer(s) and therefore the copyright holder(s) of a package are entitled to distribute their source code (which they own) for their package in any shape or form they so wish, thank you very much. They can also choose any license they wish to apply to their source code, and indeed they can choose to keep it closed source too. Fine and mighty though the FSF may be, it has nothing to do with them.

          Regardless, that thing about "provably results in the same binary" is also nonsense. Assuming you're referring to the scenario where a binary built from code licensed under GPL2/3 (or similar) has been distributed by someone other than the copyright holder, who has then received a request for the source under the provisions of that license: unless extra-special care is taken with the exact build system and dependency setup, you do not end up with the same binary anyway, even though what you do end up with may well be functionally identical. Some languages like C# even specifically require that the same source built twice results in a different binary.

          Given that the only obvious route to prove one has built the same binary (a bit-wise comparison) is effectively not available, one is left with only the assumption that the included build system did its job as anticipated by recipients. Furthermore, if someone hasn't received a binary in the first place, then there's no possible means of proof anyway; there's nothing to compare their own build against. The very point of this article is that that assumption was wrong.

          1. that one in the corner Silver badge

            Re: More Details

            > Some languages like C# even specifically require that the same source built twice results in a different binary

            I knew there were good reasons to dislike C#.

            With or without invoking C#, chucking reproducibility out of the window is another step on the path to madness.

            > unless extra-special care is taken with exact build system and dependency setup

            That should not NEED to be "extra-special" in any way, shape or form. You can certainly choose to move your build to another toolset - e.g. you want to use clang instead of VisualC - but any proper build system should take that in its stride and let you run both toolsets.

            Reproducible builds - whether of Open Source or proprietary code - were demanded by long-term (10 to 25 year product life) contracts and were a requirement for having the build accepted into escrow.

            Anyone denigrating the idea of reproducibility is a danger to be avoided like The Plague.

            Actually, we have treatments for The Plague now, so...

            1. Richard 12 Silver badge

              Re: More Details

              Clang and GCC are easily configured to be reproducible. Not sure if it's the default, but it's fairly simple to do and most large Linux and macOS projects have done so.

              There are some features that clearly aren't compatible - e.g. random compile-time hash seeds - but those are easily made consistent.

              It's fairly difficult to make reproducible builds with MSVC because the Microsoft PE format specifies that one of the important fields is a timestamp - and it has to be unique.

              However, it can be done - Microsoft changed all their builds to be reproducible a few years ago - and caused some "interesting" bugs with the debugger by misconfiguring what to put into the timestamp field. That's fixed now. Not sure if they've published any official "how to" guides, but there are several unofficial guides.

            2. bazza Silver badge

              Re: More Details

              From "that one in the corner",

              "That should not NEED to be "extra-special" in any way, shape or form. You can certainly choose to move your build to another toolset - e.g. you want to use clang instead of VisualC - but any proper build system should take that in its stride and let you run both toolsets."

              It's comparatively easy to get a repeatable build on the same box, unaltered. That's not really the point. And it's definitely not the point if a developer did their dev and test build on x64 and you're rebuilding for ARM (every single byte of the binary will be different regardless).

              To recreate the exact binary that the developer built themselves means understanding literally everything about their dev environment: OS, libraries, compiler, exact versions of everything. Moreover, this configuration data would have a short lifetime. Before too long, something somewhere in the distro is going to have been updated, having an impact on the relevant parts of the set-up. It's really hard to reproduce the exact same net binary that someone else got from the same source at a slightly different time.

              I say "net" binary, because what matters (so far as knowing for sure what is being run) is the binary that has been built and the libraries that are dynamically linked at run time. This is exactly the problem that has been encountered in this case with liblzma.

              Of course, everyone knows this. That's why people create test suites too. Repeatable behaviour is about all one can hope for.

              I know some projects have long lifetimes. But anyone insisting on being able to reproduce a binary byte-for-byte the same 25 years later is also accepting that they're missing out on 25 years of security fixes and improvements in tools, dependencies, etc. That suits some systems (who have likely also done a lifetime hardware buy), but not others.

              It's also next to impossible to achieve. For example, 25 years ago the dev platform of choice for a lot of military system software was SPARC Solaris (cross-compiling, for example, for VxWorks / PowerPC). If you want to rebuild that software byte-for-byte exactly the same today, you've been scouring eBay for hardware for 15 or so years to retain the capability, and you've been on your own as far as support from Oracle is concerned for just as long. And you probably should not connect any of it to an Internet-connected network.

              Suppliers of system hardware these days endeavour to make mid-life updates as painless as possible as the more viable alternative (effectively forced into doing so through the DoD mandating POSIX / VME, and subsequent open standards), though it is not unprecedented for ancient silicon to be remanufactured (on the basis that yesteryear's cutting edge $billion silicon process is now pretty cheap to get made).

          2. ldo

            Re: entitled to distribute their source code ... in any shape or form they so wish

            Yes, but then they don’t get to call it “Free Software”.

            1. bazza Silver badge

              Re: entitled to distribute their source code ... in any shape or form they so wish

              "Free Software" sounds like a daft name for a project.

              1. ldo

                Re: "Free Software" sounds like

                As RMS is fond of saying, that’s “free” as in “freedom”.

  7. Khaptain Silver badge

    What about the culprit

    What should be getting discussed here is the method of injection and the possible 3 letter agency in the background.

    1. Tom7

      Re: What about the culprit

      It appears to be one of the maintainers of xz who committed the backdoor, and they have since gone in to bat pretty hard, claiming it's an unrelated GCC bug while working around the symptoms caused by the backdoor.

      1. Androgynous Cupboard Silver badge

        Re: What about the culprit

        This one? https://github.com/tukaani-project/xz

        This repository has been disabled. Access to this repository has been disabled by GitHub Staff due to a violation of GitHub's terms of service. If you are the owner of the repository, you may reach out to GitHub Support for more information.

        1. Anonymous Coward
          Anonymous Coward

          Re: What about the culprit

          So, does this mean that nothing that depends on xz can be built at this time?

          1. Doctor Syntax Silver badge

            Re: What about the culprit

            Good question, but no. It just means you can't download it from the original Github repository. You could, for instance, use the Debian source packages for it. Other source packages are available, just choose carefully.

      2. Roland6 Silver badge
    2. Paul Crawford Silver badge

      Re: What about the culprit

      I don't know if China has any TLA, probably because one character does the job of many letters...

    3. bazza Silver badge

      Re: What about the culprit

      I'm not entirely sure that the word "culprit" can really apply anyway. If the source code alteration was by a legitimate owner of the source code, and they weren't making any particular promises and weren't particularly hiding anything, the result is a long way away from being "criminal".

      Admittedly, doing sneaky things with an overly complex build system to produce a dangerous result for anyone happening to make use of the library in a process with a lot of sway over system security makes them pretty culprity, and probably not a friend. But at the end of the day it's caveat emptor; there be dragons in them thar repos, a fact that doesn't seem to result in there being many dragon spotters. And obviously if someone has illicitly gained access to the source code, that's straight up computer-misuse illegality.

      Having said that, the going rate is that more security flaws exist because of incompetent, careless or unwittingly flawed development rather than deliberate sneaky modifications (or at least, so I hope). Why, whilst this is all going on there's another article on El Reg about a root privilege escalation flaw in Linux versions 5.14 and 6.6.14. Going to the effort of sneaking attack code into a repo is probably harder and slower work than waiting for a zero-day flaw to come along and jumping on it...

      1. phuzz Silver badge

        Re: What about the culprit

        xz is maintained by one person, whilst being used practically everywhere. A person calling themselves 'Jia Tan' started contributing useful patches and generally helping out, so the maintainer gradually started trusting them. At some point, whoever was operating the 'Jia Tan' account slipped this backdoor in as an unrelated change, and it got passed downstream to distros.

        So there is definitely a culprit, and it's the person, or persons, behind the Jia Tan account, who deliberately preyed on the overworked maintainer to insert malicious code. Whoever they are, they're clearly skilled at social engineering. There's no way to tell whether they're a nation-state or just an amateur, but 'Jia Tan' almost certainly isn't their real name.

        1. Michael Wojcik Silver badge

          Re: What about the culprit

          That doesn't contradict bazza's point. When you rely on third-party components, you are responsible for their provenance. Whatever the probability of a malicious authorized maintainer is for a given package, that is the risk you take on by using that package.

          Public code repositories are toxic. Low bus factors are toxic, and particularly one-guy-in-Nebraska situations. Huge graphs of transitive dependencies on open-source components are toxic. Contemporary software development is mostly addicted to these toxic practices, but that doesn't make them any less toxic.

  8. Czrly

    Systemd should be in the headline, not `xz` or `liblzma`.

    When I read through the email (https://www.openwall.com/lists/oss-security/2024/03/29/4) in full, it seems apparent that `xz` and `liblzma` play roles only as the attack vectors through which to compromise `sshd` via the vast attack surface that is systemd and `libsystemd`.

    This news should really be about how distributions should not be patching trusted sources, init-systems should not be requiring such patches and shouldn't be so bloated in the first place!

    1. Debian patches the sources of everyone's most trusted, most critical daemon – `sshd` – to add support for notifying systemd …

    2. which exposes everyone's most trusted, most critical daemon – `sshd` – to an attack surface broadened to nothing less than the entire set of libraries linked by `libsystemd` …

    3. which, due to bloat and feature-creep, is vast …

    4. and `xz` and `liblzma` just happen to be the vulnerable libraries within it that are salient today.

    It could have been anything else; the wider the attack surface, the more vulnerable everyone is.

    Every distribution is now frantically and reactively patching but the real vulnerability persists – systemd, itself – and every news item mentioning it is either bad news or notice of how its feature-creep progresses apace. As long as *that* attack-surface continues to exist on modern Linux, backdoors such as this one will only become easier and more frequent whether they are detected and reported or not.

    1. Androgynous Cupboard Silver badge

      Re: Systemd should be in the headline, not `xz` or `liblzma`.

      I'm no fan of systemd but the email also notes that "To reproduce outside of systemd, the server can be started with a clear environment, setting only the required variable: env -i LANG=en_US.UTF-8 /usr/sbin/sshd -D"

      Yes, systemd is an ideal attack vector but xz is used in a lot of places.

      I have to say this looks like quite a carefully planned and extremely cleverly executed attack, so hats off to the guy. I haven't followed it all, but there are binary blobs committed as test files here - test vectors, just the kind of thing you'd expect as part of the tests for a compression library. Then there's some work done later to somehow munge those into the library, a process I haven't entirely followed but which involves the M4 macros used by autoconf.

      There are a bunch of fingers to be pointed here. Sure, systemd - why not? - but speaking for myself, the last time I fully understood the build process for anything written in C was about the mid nineties. Complexity is the enemy here, and there's plenty of it about.

      1. Paul Crawford Silver badge

        Re: Systemd should be in the headline, not `xz` or `liblzma`.

        Complexity is the enemy here, and there's plenty of it about.

        <= THIS and with large metal knobs on!

      2. Czrly

        Re: Systemd should be in the headline, not `xz` or `liblzma`.

        You can only reproduce it if the `sshd` executable was built from compromised sources in the first place. Gentoo – for example – write the following in their advisory notice:

        > 2. the backdoor as it is currently understood targets OpenSSH patched to work with systemd-notify support. Gentoo does not support or include these patches

        > https://security.gentoo.org/glsa/202403-04

        I do not think that systemd is at fault for this particular exploit, in this instance, but rather at fault because it has created the channel through which exploits like this cannot fail to occur. It has normalised the very concept of an overly complex, bloated init.

      3. Graham Cobb Silver badge

        Re: Systemd should be in the headline, not `xz` or `liblzma`.

        There are many design decisions of Systemd that I don't like. But there really isn't any point blaming it for this.

        Systemd has a feature which some developers find useful: an app being started can notify Systemd that it has successfully started up, instead of Systemd just starting it and hoping for the best. Pretty obviously that could be a useful feature for some. Debian decided to use that feature, although I think that decision is now likely to get changed to revert to the "fire and forget" behaviour that other init systems use (and which is also the default with Systemd).

        There are two ways to send the notification: it is a simple one-line write of text to a socket and is easy to hand code. Or you can call a function in the Systemd library which does the write for you. The mistake, in this case, was to use the library: that brought in loads of other dependencies (like liblzma and xz) that are used by other parts of the library. With hindsight, a security-critical app like ssh should have avoided loading a very highly featured, general purpose library like Systemd when it really didn't need it.

        Blame xz's developers. Blame Debian for adding unnecessary features to one of the single most security-critical apps on the system, or using the easy option of linking in a massive library where a single write would do.

        You can blame Systemd for a lot of crap but I don't think it is at fault here.

        What I wonder is whether the problem would have been avoided if ssh had statically linked the library? It is probably time that all security-critical apps were audited for whether they bring in unnecessary code. Of course the tradeoff is that they wouldn't get the benefit of bug fixes in the routines they statically linked. Swings and roundabouts.

        1. _andrew

          Re: Systemd should be in the headline, not `xz` or `liblzma`.

          There has always been a perfectly serviceable mechanism for services (daemons) to notify the system of a failure to start properly: exit codes. That's literally what they're for. You can also throw in some logging to syslog on the way out, if you want to get fancy.
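
          A minimal sketch of that classic pattern, for illustration (the daemon name mydaemon and the config path are made up for the example, and EX_CONFIG is just one of the documented sysexits.h codes):

          #include <stdlib.h>
          #include <syslog.h>
          #include <sysexits.h>
          #include <unistd.h>

          /* Classic startup-failure path: say why in the log, then exit with a
             documented non-zero status that the init script or parent can report. */
          static void fail_startup(const char *why)
          {
              openlog("mydaemon", LOG_PID, LOG_DAEMON);
              syslog(LOG_ERR, "startup failed: %s", why);
              closelog();
              exit(EX_CONFIG);    /* 78: configuration error */
          }

          int main(void)
          {
              if (access("/etc/mydaemon.conf", R_OK) != 0)
                  fail_startup("cannot read /etc/mydaemon.conf");
              /* ...normal initialisation and the main loop would follow here... */
              return 0;
          }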

      4. Anonymous Coward
        Anonymous Coward

        Re: Systemd should be in the headline, not `xz` or `liblzma`.

        Yes, systemd is an ideal attack vector but ...

        More than enough to keep that crap out of the OS.

        Quite clearly, in such a situation no buts apply.

        .

    2. ldo

      Re: Haters Should Be In The Headline, Not systemd

      Just a note on why distros like Debian include this patch to their SSH server: it’s to provide a notification back to the service manager that the service has successfully initialized.

      On an old-style sysvinit system, if “/etc/init.d/sshd start” returns without error, all that indicates is that the process has started. Whereas modern service managers (not just systemd) include mechanisms so the service process can send a notification back after it has successfully completed (potentially time-consuming) initialization. Otherwise the only way to be sure the service has started properly is to check logs, or (in the case of SSH) try a remote login to confirm that works.

      There is some discussion over whether the SSH server needs to link against the whole of libsystemd just to call this notification routine: the protocol is simple enough that it could be reimplemented in maybe a couple dozen lines of C code.

      1. R Soul Silver badge

        Re: Haters Should Be In The Headline, Not systemd

        That's the wrong discussion to be having. The conversation that needs to take place is how to ensure sshd and other system services get ring-fenced from systemd.

        Another worthwhile conversation to have is why libsystemd exists and why anything *has* to use it. BTW, "because it's there" is not a valid answer.

        1. ldo

          Re: That's the wrong discussion to be having.

          By all means, have your own discussion. Nobody is forcing you to be part of this one. Don’t want to use systemd? You know what to do.

        2. Androgynous Cupboard Silver badge

          Re: Haters Should Be In The Headline, Not systemd

          It's not the wrong discussion at all, it goes to the heart of one of the problems with modern software development - over-reliance on dependencies.

          This is what happened with Log4J, it's what happens with this issue and it's potentially what happens every time your code has a dependency on another package - you are trusting a third party.

          For simple code like "send a message to another process", write it yourself! I genuinely despair that I am having to make such a self-evident point. This is supply chain poisoning. Reduce your fucking supply, and you might just become a better programmer as well as having software you can audit.

        3. Graham Cobb Silver badge

          Re: Haters Should Be In The Headline, Not systemd

          ssh doesn't have to use it. As other posts also mention, there is no requirement to use it - it seemed like a useful and neat feature to send the "yes I've started" notification and it seemed like the easiest (and probably most robust) option to use the Systemd library to do it. With hindsight, I am sure one or both of those decisions will be reversed. But that won't require changing the status or policies around Systemd.

          1. _andrew

            Re: Haters Should Be In The Headline, Not systemd

            Not exiting with an error code (and error log) used to be the perfectly acceptable record of success. It's what happens on other (non-systemd) systems. Why would you want a daemon that started, failed to initialize as instructed in its config file (or otherwise) and what, just hung around? Exit on failure (with a documented error code) is a fine protocol.

            1. ldo

              Re: Not exiting with an error code (and error log) used to be the perfectly acceptable record

              Not sure how you are supposed to notice that a daemon has failed to initialize properly, if the sysvinit script only reports that the process was created OK. This is what the systemd protocol fixes.

              1. Anonymous Coward
                Anonymous Coward

                Re: Not exiting with an error code (and error log) used to be the perfectly acceptable record

                there's so much wrong here in so few words.

                if some sysvinit script fails to report a startup error, just fix that script! daemons that fail usually return an exit code. that can be picked up and acted on by its parent process. systemd's bloatware and dependency crud simply isn't needed for any of that.

                daemons that fail to initialize should be a rare event. though i suppose they may be commonplace in linux world's twisty 3d mazes of dependency hell and bloatware ugliness. if non-initializing daemons don't happen all that often, what is the "problem" that systemd supposedly "fixes"? the cure is probably worse than the disease. if systemd is the answer, the cost/benefit analysis is seriously defective.

                daemons generally fail at start-up for reasons that should never happen on production systems: config file errors, access permissions, missing files, etc. these are supposed to get checked and fixed in pre-op testing.

                1. ldo

                  Re: if some sysvinit script fails to report a startup error, just fix that script!

                  You do know, don’t you, that sysvinit scripts only know how to spawn processes, there is no standard protocol for confirming successful startup. That’s the missing piece provided by more modern service managers, like systemd.

                  1. collinsl Silver badge

                    Re: if some sysvinit script fails to report a startup error, just fix that script!

                    Then how do the RHEL6 systems I use manage to put [ERROR] on the output of "service xxx start" when a process fails to start up?

                    1. ldo

                      Re: when a process fails to start up?

                      That’s the point: the process may be created, but it doesn’t manage to initialize properly after starting up.

        4. Czrly

          Re: Haters Should Be In The Headline, Not systemd

          I imagine that the up-stream OpenSSH developers do consider unadulterated `sshd` to be perfectly well ring-fenced from attacks against systemd, or `xz`/`liblzma` or – more generally – from the attack surface of essentially unfunded libraries with at-most-one trustworthy maintainer. That's why they don't link those libraries!

          The UNIX Principle is what we *need* to be discussing but – frankly – what's the use? It has been long abandoned. Meanwhile, call me a "hater" because, yes, I do hate the very concept of a Linux box that runs an init that scorns the UNIX Principle so extremely that a daemon likely to run as `root` must necessarily be compromised *at build time* for compatibility.

          OpenSSH should never need to know of the existence or use of whatever is chosen for init, or whatever initiates it as a daemon, let alone be critically compromised via a supply-chain attack targeting that initiator or libraries that may or may not be linked to it.

          If, indeed, `libsystemd` is not safe for use then it should not exist at all.
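
          To be fair to the protocol, if not the library: readiness notification itself doesn't require linking libsystemd at all. It is just a datagram containing "READY=1" sent to the socket named in NOTIFY_SOCKET, which a daemon can speak in a few dozen lines of plain C. A sketch, not OpenSSH's actual code:

          /* Sketch only: speak systemd's readiness protocol without libsystemd.
           * Send "READY=1" as a datagram to the AF_UNIX socket named in the
           * NOTIFY_SOCKET environment variable (a leading '@' means the
           * abstract namespace). */
          #include <stddef.h>
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>
          #include <sys/socket.h>
          #include <sys/un.h>
          #include <unistd.h>

          static int notify_ready(void)
          {
              const char *path = getenv("NOTIFY_SOCKET");
              if (path == NULL || path[0] == '\0')
                  return 0;                          /* no notify-aware manager */

              struct sockaddr_un addr;
              memset(&addr, 0, sizeof(addr));
              addr.sun_family = AF_UNIX;
              if (strlen(path) >= sizeof(addr.sun_path))
                  return -1;
              memcpy(addr.sun_path, path, strlen(path));
              if (addr.sun_path[0] == '@')           /* abstract-namespace socket */
                  addr.sun_path[0] = '\0';
              socklen_t len = (socklen_t)(offsetof(struct sockaddr_un, sun_path) + strlen(path));

              int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
              if (fd < 0)
                  return -1;

              static const char msg[] = "READY=1";
              ssize_t n = sendto(fd, msg, sizeof(msg) - 1, 0, (struct sockaddr *)&addr, len);
              close(fd);
              return n < 0 ? -1 : 0;
          }

          int main(void)
          {
              /* ... the daemon finishes its own initialisation first ... */
              if (notify_ready() < 0)
                  perror("notify_ready");
              return 0;
          }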

          1. jake Silver badge

            Re: Haters Should Be In The Headline, Not systemd

            "It has been long abandoned."

            Only by the clueless.

    3. phuzz Silver badge
      Devil

      Re: Systemd should be in the headline, not `xz` or `liblzma`.

      I'll laugh if the reason sshd was being slow, was because this backdoor in xz was fighting for resources with an as-yet unfound backdoor in a different linked library.

  9. yetanotheraoc Silver badge

    yet another vulnerability database

    table0: security-critical packages

    table1: world+dog packages

    erd: how package1 affects / is affected by package0

    case: (not in table1) you better be running in a sandbox; (depends on table0) be patching; (table0 depends on you) best refactor otherwise get out the fine-toothed comb; (mismatch between stated dependency and actual code) man the torpedoes!

    1. Michael Wojcik Silver badge

      Re: yet another vulnerability database

      This was, more or less, why the concept of the Trusted Computing Base was introduced. The problem is that actually isolating a TCB 1) is a lot of work and 2) removes a bunch of convenience. (That is, after all, the whole point of security features: to make some things more difficult. The hope is that legitimate users pay a small cost now, to avoid a much larger cost later.)

      Contemporary end-user computing has discarded a ton of security-engineering ideas from previous decades in the name of convenience. The SAK (secure attention key) mechanism is another good example.

  10. yetanotheraoc Silver badge
    Thumb Up

    Appreciation

    Hat tip to the Vulture(s) for posting this story so late into the holiday weekend.

  11. Roland6 Silver badge

    Securing Open Source

    Funny how just recently we were discussing this,

    https://www.theregister.com/2024/03/08/securing_opensource_software_whose_job/

    A concern has to be how to turn this seemingly negative publicity into positive publicity: we know about this because it’s open source; if it were closed source/proprietary, things would be less certain.

    1. Anonymous Coward
      Anonymous Coward

      Re: Securing Open Source

      Not sure why the downvotes; that’s a reasonable point. Yes, it sucks, but it’s five weeks from first check-in to the issue being identified, analysed and fixed.

      1. R Soul Silver badge

        Re: Securing Open Source

        Depends on your definition of "fixed".

        The gaping security holes in systemd are still there, ready to be exploited by the next dodgy shared library or whatever that comes along. It's not known yet what measures the maintainers of xz or other open source tools have put in place to protect their software from bad actors who try to introduce malware.

        We were fortunate the liblzma exploit was quickly found and addressed, partly because xz is a small project that doesn't change often and has a handful of developers. If the obfuscated malware had been sneaked into (say) gcc, we would not have been so lucky - more so if it had been added incrementally over the course of a few years and new releases.

        1. Roland6 Silver badge

          Re: Securing Open Source

          > Depends on your definition of "fixed".

          The same one as used by all other producers of software…

          > The gaping security holes in systemd are still there

          Just as they are in other software, hence the regular stream of security fixes…

          > It's not known yet what measures the maintainers of xz or other open source tools have put in place to protect their software from bad actors

          No, we don’t; however, we do know the actions they have taken to limit and review the work of this specific bad actor.

          I suggest the measures that are necessary are all part of improving the security of the Open Source supply chain, something that probably needs to be done as an overarching activity. Perhaps GitHub will require all developers to produce proof of real-world ID and character references. It won’t stop bad actors, but it might remove the casual ones… however, there are also some major downsides to such measures…

        2. Roland6 Silver badge

          Re: Securing Open Source

          This discovery has also raised the issue of GitHub history given it can be tampered with and manipulated…

        3. Androgynous Cupboard Silver badge

          Re: Securing Open Source

          So now we're 48 hours in and it should be clear to everyone commenting here what happened. The xz library was attacked. liblzma is not normally a component of sshd, but many distros patch sshd to link libsystemd (for start-up notification), and libsystemd in turn pulls in liblzma. Our hacker targeted that combination, which also requires glibc (i.e. on Linux) for the exploit.

          So there are several components required to make this work - xz compression, OpenSSH, glibc and - yes - systemd, but only by virtue of libsystemd pulling in liblzma. Is that unreasonable? No. xz is a great algorithm and (along with zstd or brotli) an excellent choice in 2024. systemd has many problems, but it's not the primary cause of this one.

          Edit: ah, link above showing our likely culprit had a go at attacking zstd too.
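
          To make the glibc point concrete: the payload hooked itself in via GNU indirect functions (ifunc), whose resolvers run inside the dynamic loader before main(). A deliberately benign sketch of that mechanism (gcc/glibc specific; none of this is the backdoor's actual code):

          /* Benign illustration of the glibc/GCC "ifunc" mechanism the
           * backdoor abused. The resolver runs at load time, before main(),
           * which is what made it an attractive place to hook symbols; here
           * it just picks an ordinary implementation. Build with gcc on a
           * glibc system. */
          #include <stdio.h>
          #include <string.h>

          int impl_strlen(const char *s)
          {
              return (int)strlen(s);
          }

          /* Resolver: called once by the dynamic loader; returns the
           * function that the symbol my_len will be bound to. */
          static __typeof__(impl_strlen) *resolve_my_len(void)
          {
              return impl_strlen;
          }

          /* my_len is an "indirect function": its definition is whatever
           * the resolver returned. */
          int my_len(const char *s) __attribute__((ifunc("resolve_my_len")));

          int main(void)
          {
              printf("%d\n", my_len("hello"));
              return 0;
          }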

          1. Roland6 Silver badge

            Re: Securing Open Source

            Useful timeline here that tries to make sense of the scale of the problem uncovered.

            https://boehs.org/node/everything-i-know-about-the-xz-backdoor

            1. Androgynous Cupboard Silver badge

              Re: Securing Open Source

              Thank you, that's very thorough and well worth a read.

  12. DS999 Silver badge
    Black Helicopters

    If this was sponsored by a nation state

    It is unlikely to be the only such backdooring attempt they've sponsored.

    1. Androgynous Cupboard Silver badge

      Re: If this was sponsored by a nation state

      I had that same thought myself. If it was the contributor it currently appears to be, then the infiltration process potentially started 18 months ago. It feels like the ground has just shifted very significantly under open-source development.

      1. Anonymous Coward
        Anonymous Coward

        Re: If this was sponsored by a nation state

        I thought the great firewall of china was supposed to prevent stuff like this from happening.

        1. Paul Crawford Silver badge

          Re: If this was sponsored by a nation state

          Only in the other direction...

  13. Roland6 Silver badge

    GitHub Copilot…

    This attack (and the comments arising) exposes a big problem looming with AI training data. GitHub Copilot will have been trained on GitHub projects, so cleaning out the xz backdoor is going to require junking the existing learning and a new training run…

    Interestingly, for those wanting to purge systemd from Linux, a similar approach will be necessary…

    1. Paul Crawford Silver badge
      Trollface

      Re: GitHub Copilot…

      <clippy>

      Hey! It looks like you are trying to backdoor this package. Do you want some help?

      </clippy>

      1. R Soul Silver badge

        Re: GitHub Copilot…

        <clippy>

        Hey, it looks like you're playing with systemd! Fuck that! Even I'm not going to touch it with a shitty stick.

        </clippy>

        1. jake Silver badge

          Re: GitHub Copilot…

          Except Microsoft triumphantly announced systemd on WSL back in September of '22, as reported here on ElReg.

          https://www.theregister.com/2022/09/24/systemd_windows_linux_microsoft/

    2. _andrew

      Re: GitHub Copilot…

      A valid point in general (probably why studies show so many security vulnerabilities in chat-generated code), but in this case there's an extra wrinkle: the part of the exploit that triggers it was _not_ on github. It was only inserted into the "upstream tarball" that the packagers depended on, rather than cloning and building from source themselves.

    3. Lipdorn

      Re: GitHub Copilot…

      The attack was embedded in two binary test files and a modified build script. The binaries sat in the Git repository disguised as test data, while the modified build script only shipped in the release tarballs, so the source visible on GitHub looked clean.

      1. Anonymous Coward
        Anonymous Coward

        Re: GitHub Copilot…

        So where was the compromised build script stored and published if not in GitHub?

  14. Adam Inistrator

    systemd is technical debt.

    1. Anonymous Coward
      Anonymous Coward

      systemd is technical debt.

      Unfortunately, it is much worse than just technical debt.

      Like another commentard said about systemd:

      "... a developer sanctioned virus running inside the OS, constantly changing and going deeper and deeper into the host with every iteration and as a result, progressively putting an end to the possibility of knowing/controlling what is going on inside your box as it becomes more and more obscure."


  15. NickHolland
    Facepalm

    how many compression systems do we need?

    allow me to toss out a point I haven't seen mentioned...why do we need xz so embedded into the system?

    gzip is pretty much standard unix these days. Don't think we could live without it. We need gzip.

    But do we really need xz that tightly embedded into the system? Modern computers have lots of disk space, lots of bandwidth, is a 20% (if that much) improvement over gzip really worth the complexity of additional compression protocols? I've been using compression utilities for 40 years...I'm thinking no.

    If you want xz or bzip or rzip or 7z or rar or pigz or... go ahead, add it. But why is the basic system thinking it needs it? I've only used xz when someone hands me a file already compressed with xz.

    Imagine if the time spent integrating xz into Linux was spent auditing gzip and other things?

    Do one thing, do it well, and make sure it is done correctly.

    1. Roland6 Silver badge

      Re: how many compression systems do we need?

      Somewhere in all the discussion threads I’ve read these last couple of days, this question was raised and answered. Basically, the problem is around the exact context in which the compression and decompression is being performed. In the case of Linux packages, the package is effectively compressed once, massively distributed and then uncompressed. Hence in the interests of efficiency of transmission and speed of unpacking/installing, compression systems that result in smaller “tarballs” which can be quickly unpacked on lower specification end user computers are favoured over others such as gzip.
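
      To make the trade-off concrete, a rough sketch using liblzma's one-shot buffer API (link with -llzma): you pay for a high preset once on the build machine, and every client gets the comparatively cheap decompress. The input here is obviously a stand-in and error handling is trimmed:

      /* Sketch of "compress once at a high preset, decompress everywhere"
       * with liblzma's one-shot buffer API. */
      #include <lzma.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(void)
      {
          const uint8_t in[] = "pretend this is a package payload, repeated many times...";
          size_t in_size = sizeof(in);

          size_t out_cap = lzma_stream_buffer_bound(in_size);
          uint8_t *out = malloc(out_cap);
          if (out == NULL)
              return 1;
          size_t out_pos = 0;

          /* Preset 9 | EXTREME: slow to compress, cheap to decompress - fine
           * when one build machine compresses and millions of clients unpack. */
          lzma_ret rc = lzma_easy_buffer_encode(9 | LZMA_PRESET_EXTREME,
                                                LZMA_CHECK_CRC64, NULL,
                                                in, in_size,
                                                out, &out_pos, out_cap);
          if (rc != LZMA_OK) {
              fprintf(stderr, "encode failed: %d\n", (int)rc);
              free(out);
              return 1;
          }
          printf("%zu bytes -> %zu bytes\n", in_size, out_pos);
          free(out);
          return 0;
      }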

    2. Androgynous Cupboard Silver badge

      Re: how many compression systems do we need?

      25 years ago you might have been asking the same question about LZW.

      If you’re dealing with genuinely huge amounts of data, a 10-20% improvement is a huge deal, as is the ability of modern algorithms to compress using multiple threads (flate is inherently single-threaded). Going the other way, LZ4 is so fast you can use it in real time - I used it for in-memory compression on a project recently, for a large data structure I had to store temporarily, just to reduce heap pressure. The data was never written to disk.

      Is xz necessary for sending a short “process is started” message to systemd? Nope. But flate is… not exactly showing its age, but it wouldn’t be the first choice for a lot of use cases.
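
      For a flavour of how cheap that can be, a rough sketch of an in-memory round trip with LZ4's one-shot API (link with -llz4); the data is a stand-in, not the structure from that project:

      /* Sketch of in-memory LZ4 round trip: fast enough that compressing
       * transient data purely to reduce heap pressure is plausible. */
      #include <lz4.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      int main(void)
      {
          const char src[] = "some large, temporary, fairly repetitive data structure ...";
          const int src_size = (int)sizeof(src);

          int cap = LZ4_compressBound(src_size);
          char *packed = malloc((size_t)cap);
          char *restored = malloc((size_t)src_size);
          if (packed == NULL || restored == NULL)
              return 1;

          int packed_size = LZ4_compress_default(src, packed, src_size, cap);
          if (packed_size <= 0) { fprintf(stderr, "compress failed\n"); return 1; }

          /* Later, when the data is needed again: */
          int n = LZ4_decompress_safe(packed, restored, packed_size, src_size);
          if (n != src_size || memcmp(restored, src, (size_t)src_size) != 0) {
              fprintf(stderr, "round trip failed\n");
              return 1;
          }
          printf("kept %d bytes instead of %d while idle\n", packed_size, src_size);

          free(packed);
          free(restored);
          return 0;
      }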

  16. Telman

    I have a question for the Hive Mind (tm). Would this affect MX Linux? I don't believe it would, but I do not know that much about Linux... :)

  17. dcocz

    There are approximately 160k small community drinking water treatment systems in the US today. These are controlled and monitored by RTUs that offer, over GPRS, setup and control of flow valves and chemical additives like chlorine and fluoride, among other treatments. These are used daily to make sure drinking water is safe. Companies like Hach and Siemens water technologies have to make these RTUs compatible, and they use the Linux network stack and SSH. In Feb '23 the WHO made a statement about reverse-osmosis water filters, suggesting they are unsafe due to their ability to filter out important minerals. If you want to know a good target for this hack, then consider US EPA drinking water.
