back to article Sysadmins: Why not simply verify there's no backdoor in every program you install, and thus avoid any cyber-drama?

Half of publicly reported supply chain attacks were carried out by "well known APT groups", according to an analysis by EU infosec agency ENISA, which warned such digital assaults need to drive "new protective methods." Of the 24 supply-chain attacks studied by ENISA since January 2020, a dozen were attributed to APTs while 10 …

  1. Pascal Monett Silver badge

    How about using proper change control ?

Once an update is committed to the upload server of the supplier, there could be a mechanism to ensure that that file is properly identified (MD5 and signature, or something similar). As soon as the file changes, if there is not a proper declaration in the records, shut down the server's Internet connection, send an alert mail, and wait for the admins to come and check.
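A minimal sketch of that check (hypothetical manifest format and filenames; SHA-256 rather than MD5, which is no longer collision-safe):

```python
import hashlib
from pathlib import Path

def verify_release(path, manifest):
    """Compare a just-uploaded file against the hash recorded at
    change-approval time; a mismatch means the file changed after
    it was signed off (the 'pull the network, mail the admins'
    response would hang off the False branch)."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    expected = manifest.get(Path(path).name)
    return digest == expected
```

Here `manifest` stands in for whatever record the change-control system wrote when the release was approved.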

I'm pretty sure implementing this kind of procedure wouldn't break the bank, and it seems to me that it could be rather effective in keeping customers safe.

    1. tip pc Silver badge

      Re: How about using proper change control ?

      “ Once an update is committed to the upload server of the supplier, there could be a mechanism to ensure that that file is properly identified (MD5 and signature, or something similar). As soon as the file changes, ”

      What if the update is fully signed etc and approved but contains the malicious payload that no one has spotted?

As I understand things, the project owners push open source updates to the repositories and the code is then signed, so anyone pulling it can be sure it's how the maintainers intended. Which is absolutely no use if the code itself was intentionally compromised to look legit and has all the correct signing etc.

      1. unimaginative
        Linux

        Re: How about using proper change control ?

That works well with Linux repositories (possibly even better with things like the BSDs), which are (relatively) selective about maintainers. Unmaintained packages get dropped after a while. The package maintainers are not typically the project owners, if by the latter you mean the people who write the software.

It will not work for things like npm and PyPI, which are more concerned with making it easy to package stuff; there the authors are usually the package maintainers, and probably less willing to jump through hoops.

      2. elsergiovolador Silver badge

        Re: How about using proper change control ?

        Some organisations do require that every update is checked by two humans, so nothing is released until two people verify the changes and click the button for the build to go out. It's a pain, but sometimes necessary.

Of course it is not entirely foolproof: if these people are tired, or not paid adequately for their responsibility, they may act carelessly.
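The two-person rule reduces to a small gate; a sketch, assuming hypothetical (approver, artifact-hash) records rather than any particular CI system:

```python
def release_allowed(artifact_hash, approvals, min_approvers=2):
    """Four-eyes check: the build may ship only if at least
    `min_approvers` *distinct* people approved this exact artifact.
    Two clicks from the same tired account don't count."""
    approvers = {who for who, what in approvals if what == artifact_hash}
    return len(approvers) >= min_approvers
```

The distinct-approver set is the whole point: a replayed approval, or one person with two browser tabs, still only counts once.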

        1. eldakka

          Re: How about using proper change control ?

          > so nothing is released until two people verify the changes and click the button for the build to go out.

          But then the system that has these buttons can be targeted, so some malware can be what 'presses' these buttons.

And even then, that doesn't prevent the issue. In the SolarWinds attack, it was the build server itself that was compromised. So even if 100 developers had vetted an update in detail and all approved the pull request, when that update was sent to the build server to be compiled, the build process infected the code, signed the result with the organisation's signing keys, generated the hash, and published both the signed compiled code and the hash of that code.
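One known (partial) mitigation for exactly this build-server scenario is reproducible builds: compile the same tagged source on independent machines and refuse to release unless the artifacts match bit-for-bit, so a single compromised builder produces a visible mismatch rather than a validly signed trojan. A sketch of just the comparison step:

```python
import hashlib
from pathlib import Path

def builds_agree(artifact_paths):
    """Reproducible-builds check: artifacts compiled independently from
    the same tagged source must hash identically before release.
    One subverted build server then shows up as a digest mismatch."""
    digests = {hashlib.sha256(Path(p).read_bytes()).hexdigest()
               for p in artifact_paths}
    return len(digests) == 1
```

This only helps if the builds really are bit-for-bit reproducible, which takes real engineering effort (pinned toolchains, no embedded timestamps, etc.).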

        2. Anonymous Coward
          Anonymous Coward

          Re: How about using proper change control ?

          Who checks the checkers?

          1. J. Cook Silver badge
            Joke

            Re: How about using proper change control ?

            We'd need a checker-checker.

And then a checker-checker-checker, etc., ad nauseam. (Down that path madness lies.)

      3. Rdc567

        Re: How about using proper change control ?

        1. Small sql lite database with MD5 hashes of repos.

        2. Hash database can only be updated by signed gnupg key

        3. Develop algorithm of trust rated on 1 to 5. 5 is fully trusted. 1 is unknown and use at your own risk. To achieve 5 library must have been used with no verified complaints for 12 months. Other factors taken into consideration could be: from a verified developer, library has been voted as useful by select community members, trusted security researcher has audited source code and found no backdoors.

        4. Background debugger logging and tracing file r/w and socket calls to the terminal in a safe running mode. This addition would allow users of a library to see what a library is doing in the background.

Libraries run with a rating below 3 will error without a --force-non-safe-library switch.

        Again, nothing is 100 percent foolproof. We can only do things to make people more aware what they are running.

        We can all see where this is going: running all code in containers without access to the operating system or other components, unless granted.
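Items 1 and 3 plus the --force switch fit in a few lines; a toy sketch only (SQLite via Python, SHA-256 instead of MD5, and the GnuPG-signed updates from item 2 left out):

```python
import sqlite3

MIN_TRUSTED_RATING = 3  # below this, refuse to load unless forced

def init_db(conn):
    # Item 1: a small database mapping library name -> (hash, trust rating)
    conn.execute("""CREATE TABLE IF NOT EXISTS repo_hashes
                    (name TEXT PRIMARY KEY, sha256 TEXT, rating INTEGER)""")

def check_library(conn, name, sha256, force=False):
    """Refuse a library whose hash is unknown or changed, or whose
    trust rating (item 3) is below the threshold, unless forced."""
    row = conn.execute(
        "SELECT sha256, rating FROM repo_hashes WHERE name = ?",
        (name,)).fetchone()
    if row is None or row[0] != sha256:
        raise RuntimeError(f"{name}: hash unknown or changed")
    if row[1] < MIN_TRUSTED_RATING and not force:
        raise RuntimeError(
            f"{name}: trust rating {row[1]} is below {MIN_TRUSTED_RATING}; "
            "rerun with --force-non-safe-library")
    return True
```

The hard parts doublelayer raises below (who signs the database, how ratings are earned) are exactly what this sketch leaves out.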

        1. doublelayer Silver badge

          Re: How about using proper change control ?

          This is nearly untenable.

          "1. Small sql lite database with MD5 hashes of repos.": If you do that for long enough, that's not going to be a small file. Creating the hash requires a foolproof method of hashing everything. Updating it is not a fast operation.

          "2. Hash database can only be updated by signed gnupg key": And who is going to have said key? Because if you want your database updated regularly and you require a human to do it each time to properly sign with a securely-held key, you're going to get a burned-out human who clicks OK on every update that comes out.

          "3. Develop algorithm of trust rated on 1 to 5. 5 is fully trusted. 1 is unknown and use at your own risk.": It's open source, everything's at your own risk. But let's leave that aside and look at how one gets trusted here:

          "To achieve 5 library must have been used with no verified complaints for 12 months.": No verified complaints either means nobody uses it or nobody's tracking complaints. If you mean only security vulnerability reports, not complaints about bugs or performance, then you could go for some time without detecting those vulns. A lot of people don't check every task, so someone could be using it and not reporting the vulnerabilities which do exist.

          "Other factors taken into consideration could be: from a verified developer,": Verified by whom, how? Confirmation that the developer is the one pushing out the updated code how?

          "library has been voted as useful by select community members,": Useful does not mean secure. Lots of things that are useful also have long CVE lists.

          "trusted security researcher has audited source code and found no backdoors.": A useful one. Unfortunately also a very expensive one.

A note here: none of these conditions will help when dealing with updates to code, which is often how malware gets in once the supply chain is compromised. If you update a library to fix a bug and it was an attacker who submitted it, having compromised the dev's system (see exactly how SolarWinds happened), the dev is the same, the software is the same, and the source is the same. Unless you're planning to force a 12-month change freeze on all updates, in which case A) any detected vulnerability will be exploitable on your system for a year and B) nobody will be testing it because they're all doing that.

"4. Background debugger logging and tracing file r/w and socket calls to the terminal in a safe running mode. This addition would allow users of a library to see what a library is doing in the background.": Sure, go ahead. Few people will do that, but the tools are out there for you to do it if you want.

"Libraries run with a rating below 3 will error without a --force-non-safe-library switch.": How? The calling process checks the library? The same calling process which requested the library? Or maybe it's in the compiler so someone can put that switch in the build script and send the binaries out?

          "We can all see where this is going: running all code in containers without access to the operating system or other components, unless granted.": Yeah, that might be a better starting point. Insulation of different processes has proven the most effective method of limiting the damage done by a successful attack. It's also harder to do because you often need the software to interact with that kind of data, but it could be worse.

          1. Rdc567

            Re: How about using proper change control ?

You are attacking verbiage, good sir.

            An insecure library or binary would not be at all useful.

I was referring to the recent attacks via PyPI. But yes, all operating systems should have the ability to strace all reads/sockets more easily.

            I guess your reply is a good example why pundits curtail security. Pointless conjuring flaws by experts who lack high technical knowledge. But have skill in sounding at an expert level in meetings just by posting pessimism on the internet for self satisfying gratification amongst true professionals

            1. doublelayer Silver badge

              Re: How about using proper change control ?

              "An insecure library or binary would not be at all useful.": Wrong there. Most operating systems have been very insecure at many points in their history. They are still useful. A library which parses a format but is prone to injection is also useful though insecure, which is why people still use such libraries (especially serializers). People who don't know about security vulnerabilities frequently use insecure code because it is useful. Devs who don't care about security because they're bad at their jobs frequently ignore vulnerabilities in things because those things are useful. Now if you meant to say secure instead of useful there, I wouldn't be arguing this point. It's not core to the problems I saw in your suggestions.

"I was referring to the recent attacks via PyPI. But yes, all operating systems should have the ability to strace all reads/sockets more easily.": Then we or you should write a better debugger. We have existing tools, and we can improve them.

              "I guess your reply is a good example why pundits curtail security. Pointless conjuring flaws by experts who lack high technical knowledge.": This would be the case if the flaws I "conjured" are wrong. I spoke of many flaws. Care to explain why they're false? Because if I conjured any real flaws, they might either need remediation or at least acknowledgement before you forge ahead.

              "But have skill in sounding at an expert level in meetings just by posting pessimism on the internet for self satisfying gratification amongst true professionals": Well, as easy as it is to lie online, I'll say truthfully that I work in security and have plenty of pessimism from my experiences there. The problems I see are ones that could impair the usefulness of your suggested approach. You don't have to believe me, but that need not be the important issue if we stick to more definite topics. Proving that I'm wrong about the database scaling size is easier than proving I'm lying about my experience. Moreover, the accuracy or inaccuracy thereof can be proven by collecting and working with public data rather than trading possibly true but unverifiable attacks on one another.

    2. martyn.hare

      Linux proves that doesn’t work

Ask GKH about those commits he had to revert because they were part of a study on supply-chain attacks which the Linux kernel project didn’t defend against. The truth is that businesses need to stop buying large, bulky “solutions” for every problem they have and instead let their talented employees sort things out properly.

Supply chain attacks can and will happen; the goal is not to embrace every single new fad under the sun, but to have people who thoroughly understand the software they are working with. The Trusted Computing Base should be as small and as simple as possible to keep things secure, and everything else should run with as little authority as is feasible, on the assumption it could already be compromised.

      Also, make sure the stuff you use has been independently vetted. Don’t use npm or pip or the like and don’t trust bundled libraries, stick to vetted, system-wide libraries instead.

      1. John Deeb

        Re: Linux proves that doesn’t work

That sounds like a dozen large IT organizations I know, where hardly anything manages to happen any more while the competition is smoking their asses and getting away with the occasional incident. Perhaps the suggested approach might work for government and banking, since they can be endlessly financed by other people's funds and savings. Or at least the market mechanism is less punishing there.

        1. elip

          Re: Linux proves that doesn’t work

Yet I used to work for an organization just like this, and validated 3rd-party code as part of my sysadmin duties, *not* because somebody specifically asked me to, but because at that time it was just a standard part of the day job, in my opinion. It was simply the right thing to do. Turns out society and our customers just wanted cheaper and cheaper shit, and eventually they got it. Enjoy the fallout, folks!

          1. James R Grinter

            Re: Linux proves that doesn’t work

Back in the day, we systems programmers built almost every third-party tool ourselves from source. Even the C compiler. But we sure as hell weren’t reading and comprehending all the source, not even checking for common sources of mistakes (printf without format args, popen calls, etc.), and you’ve only got to recall Ken Thompson’s seminal paper Reflections on Trusting Trust to see the elephant in the (GCC) room.

            Software is necessarily more complex these days because capabilities are higher and we demand more, and that will almost always involve many more third party dependencies, which in turn may have more. That cat is not going back into the bag, because it’s just not feasible, or wise - trivial string padding routines aside - for a development team to rewrite all those themselves (crypto, maths, graphics, UI, kernel, etc).

            The solution isn’t to ditch the “uncontrolled” open source dependencies, either, and go back to commercial (commercial C++ libraries were all the rage in the 90s and early 00s), because we’ve seen with SolarWinds, Kaseya and many others that if you’re a high value target you *will* get attacked and compromised at some point for leverage into other networks. You need to have in place methods to prevent, mitigate or detect it when the time comes.

            1. dlc.usa
              Boffin

              Re: Linux proves that doesn’t work

While gcc may be the elephant in the room, under the room lurk the true leviathans, as Thompson hinted: in the microcode, ME technologies, etc. Some years back I read an account of installing GNU/Linux on a hard drive's controller, which gave a whole new meaning to installing something on your hard drive.

      2. unimaginative
        Linux

        Re: Linux proves that doesn’t work

        The code was reverted, and I think before it got into a release? It was removed before it got into those signed repos.

        I do use pip, but I have started using tools that tell me what the dependencies are. I also try to minimise dependencies.
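For Python specifically, the standard library can already produce that dependency listing without any third-party tooling (Python 3.8+); a sketch:

```python
from importlib.metadata import distributions

def dependency_report():
    """Map each installed distribution to the dependencies it declares
    in its metadata: a first step toward knowing what a 'pip install'
    actually dragged in. (Declared, not resolved: transitive pins and
    environment markers still need a proper tree tool.)"""
    report = {}
    for dist in distributions():
        name = dist.metadata["Name"] or "unknown"
        report[name] = list(dist.requires or [])
    return report
```

Dedicated tools build the full resolved tree from exactly this metadata.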

Surely the aim is to embrace every new fad! If you are a CTO and you rely on your employees to sort things out, it will be your fault if it goes wrong; if you buy in a solution, it's the vendor's fault. The same applies to anything "cloud", especially from a vendor everyone has heard of.

        1. Sandtitz Silver badge

          Re: Linux proves that doesn’t work

          "The code was reverted, and I think before it got into a release? It was removed before it got into those signed repos."

          Nope.

          Kroah-Hartman, of the Linux Foundation [...] told The Verge that one patch from the study did make it into repositories, though he notes it didn’t end up causing any harm.

          1. unimaginative
            Linux

            Re: Linux proves that doesn’t work

            " In their paper, Lu and Wu claimed that none of their bugs had actually made it to the Linux kernel"

            so that seems to be contested.

So, four questions:

1. What made it into a release?

2. What made it into a stable release?

3. What made it into any distro's development/unstable/whatever repos, and which distros?

4. What made it into any distro's stable (or equivalent) repos?

        2. FlamingDeath Silver badge

          Re: Linux proves that doesn’t work

          There is no cloud, there is only the license agreement.

          Software houses have never sold software, be it software in your own hardware, or their own.

          Am I the only one who realises this?

      3. veti Silver badge

        Re: Linux proves that doesn’t work

        There aren't enough "talented employees" in the world, never mind in the country, for every employer to have its own software written bespoke from scratch.

        And even if we tried it, what then? How many of the resultant systems would really be secure, and for how long?

    3. Michael Wojcik Silver badge

      Re: How about using proper change control ?

      Argh. Look at the mechanisms involved in most software supply-chain attacks. Code signing (there's no need for a separate hash, so I have no idea why you mentioned MD5 – which is no longer sufficiently collision-safe anyway) would not have prevented them.

      It's not like we don't understand how supply-chain attacks work. There's plenty of analysis readily available which would demonstrate why simple solutions aren't effective.

      I know, I know. This is the Reg forums, where everyone with an armchair is an expert.

  2. tip pc Silver badge

    Trusting the builders who trust their suppliers

    “ It would be better to have robust mechanisms in place to verify that software packages are legit, and released and fetched as intended by their maintainers, and in a way that if build systems are infiltrated, as we saw with SolarWinds, unauthorized changes are still apparent”

    If we all had the skills to verify the work supplied by vendors then surely we’d not need the vendors.

Over the years many exploits have been found in open source software. I remember one was traced to an update made between Christmas and New Year, years beforehand. Most code is complex and understood by only a few people. Lots of code relies on code libraries from others that they themselves have not verified and which may auto-update.

Let’s hope for a return of in-house coders to build and analyse this stuff instead of farming it out.

    1. a_yank_lurker

      Re: Trusting the builders who trust their suppliers

A general-purpose package available on the web has its advantages, but home-grown packages have theirs also. The general-purpose package has two major advantages: it's already coded and ready to use, and the development cost of using it is very low. Its disadvantage is that you lose control of that bit of code; updates breaking your code is a major problem. Home-grown packages have the major advantage that the organization has complete control of the package and can update it as needs change. Their primary disadvantages are the cost and time to develop the code.

When to use 'off the shelf' code vs home-grown code is not always obvious. There is a grey area in the middle where careful analysis of business needs and capabilities must be done, and often this analysis is not done, particularly if there is a rush to ship code. Another issue is when 'off the shelf' code should be replaced by home-grown code. Complicating the issue are the zealots on both sides who fail to see the problems with their preferred solution.

I doubt anyone is going to write something like Word or Excel for general office use; they would use some readily available package, whether it's Orifice, LibreOffice, or something else. But the inventory and billing system may need to be a custom package to meet their needs.

      1. tip pc Silver badge

        Re: Trusting the builders who trust their suppliers

        "But the inventory and billing system may need to be a custom package to meet their needs."

would you write your own SSL library for that, or just use the publicly available library that has lots of people looking at it to spot and repair vulns?

My main issue is that few actually contribute financially to those who maintain the software we need.

What happens when someone dies / goes to prison / decides they need to pay the bills so accepts money from shady places?

  3. Andy Non Silver badge
    Pirate

    Herculean task

Software is only getting more complex, with more lines of code, third-party components and contributors. Short of having an in-house trusted group of programmers wade through hundreds of thousands of lines of source code, I can't see this problem going away. What company is realistically going to employ such a team anyway? The problems are not just malicious code sneaking in but the endless bugs resulting in zero-day exploits.

    Miscreants only need to find one exploitable bug in software but those trying to make the software safe and secure need to find and fix every exploitable bug. It just isn't going to happen.

    1. chivo243 Silver badge
      Thumb Up

      Re: Herculean task

      + 1

      I had the same thoughts...

    2. Warm Braw

      Re: Herculean task

      Miscreants only need to find one exploitable bug

      And there will likely be several in any non-trivial system.

There is no "solution", but a lot more could be done to mitigate the scope of any exploit - at the cost of additional complexity and effort in explicitly having to grant permissions to individual process instances to access specific files, network resources and so on - with limited quotas. The whole UGO permission model dates from a time before networks, when people wrote their own code and the only people who could be threatened by that code were other local users. Even though code must now be considered malicious by default, it's very difficult to get people to think beyond that model - either to imagine new mechanisms or even just to use the ones that already exist.
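Some of the mechanisms that already exist are less exotic than they sound. A minimal POSIX sketch of per-process quotas (not a full capability model; the 1 MiB and 64-descriptor limits are arbitrary examples):

```python
import resource
import subprocess
import sys

def run_limited(cmd):
    """Run a child process under explicit quotas: whatever the code
    inside it does, it cannot write a file larger than 1 MiB or hold
    more than 64 open descriptors."""
    def clamp():
        # Applied in the child, between fork and exec
        resource.setrlimit(resource.RLIMIT_FSIZE, (1 << 20, 1 << 20))
        resource.setrlimit(resource.RLIMIT_NOFILE, (64, 64))
    return subprocess.run(cmd, preexec_fn=clamp)
```

Namespaces, seccomp and MAC systems go much further; the point is only that per-process restriction is not science fiction.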

      It's possible that DevOps might provide a mechanism to limit the configuration pain, but we are overdue a better security model because the pain of not having one is becoming intolerable.

    3. FlamingDeath Silver badge

      Re: Herculean task

      Shit man, how the hell do we manage to build safe and durable constructions. Even the pyramids are still standing ffs

      Software is a joke, its the fucking wild west with shitty cowboy builders everywhere

      Someones gotta say it

      1. Cav Bronze badge

        Re: Herculean task

        Yeah, we can build software in the same fashion as the pyramids - thousands of people working for decades. That'll really work...

        I can only assume you are not a developer. A system can have millions of lines of code, worked on by different people over years.

I vaguely remember an Australian project to mathematically prove a software app was safe and bug-free. It took something like 100 staff members 7 years to verify some 7,000 lines of code. Code is complex and voluminous. Unless you want to wait years between updates you have to accept some bugs.

        As for safe and durable constructions, plenty of buildings fall down.

    4. Anonymous Coward
      Anonymous Coward

      Re: Herculean task

      It is a near-impossibility to prove that something does not exist. Doesn't really matter whether it's a backdoor or $Deity.

  4. karlkarl Silver badge

Perhaps when a developer is going to depend on a third-party library (such as a JavaScript one to pad a string) they should first have to write a 200-page document detailing why they need it rather than writing their own.

    I think this will do very well at reducing the sheer volume of needless shite dependencies that some guys churn into their projects.

    Unfortunately this won't help open-source projects where this problem is even more rampant!

    1. a_yank_lurker

The issue you raise is not with using an outside package but with the competence of the programmer. Competent programmers try to use the features built into the language to solve their coding problems first, then consider using an outside package. Language features generally change more slowly than outside packages.

      1. eldakka

> Competent programmers try to use the features built into the language to solve their coding problems first, then consider using an outside package.

        But most programming languages depend on external packages to do anything useful.

        I mean, are you saying you could write a useful C program in a reasonable amount of time without using stdio which is, technically, an external package?

In effect you might be able to write your own stdio in the program you are creating, but if you had to write stdio from the ground up every time you wrote a program, your effective output of business-useful code, the work you are actually hired to produce, would be very close to zero.

    2. DS999 Silver badge
      Facepalm

      So your solution

      Is to have programmers everywhere re-invent the wheel over and over again, by making third party code incredibly painful to use?

      I look forward to seeing all the home grown encryption libraries, I'm sure things will be MUCH more secure!

      1. EnviableOne

        Re: So your solution

        just because you need a wheel, doesn't mean you have to include the cart it was built for.

        maybe you don't need the whole library, just the one function

    3. katrinab Silver badge
      Megaphone

      That’s easy, in most cases, the third party library will be better and more secure.

  5. Imhotep

    We Just Advise, We Don't Implement

I have nothing but compassion for those whose job it is to secure their company's infrastructure.

    It seems to me to be an impossible, never ending and thankless task with new exploits appearing every week.

    As for the suggestion that each company verify the software for this kind of thing: Are they serious?

    For just one, where are all of these new experts with access to every app's source code supposed to come from?

    I don't think we'll see any improvement on this front under the current circumstances. At present, it is all reward and no risk or repercussions for this type of behavior.

    1. elip

      Re: We Just Advise, We Don't Implement

      >For just one, where are all of these new experts with access to every app's source code supposed to come from?

I was one of these "experts" as you call them. It was part of my sysadmin gig: maintain the toolchain, audit new toolchain requests, continuously audit the infrastructure, move slowly especially when devs want you to move fast, etc. This wasn't exactly a safety-critical industry either; we made consumer electronics - TVs, computers, phones, cable set-top boxes, walkie-talkies, etc.

      Most of us were laid off, I suppose you can just re-hire us from the unemployment lines?

The goal is not to get access to *every app's source code*; the goal is to not even allow the app onto your network to begin with. It's really not that hard, man. This was around the 2004 timeframe, and it was for sure happening decades before then in safety-critical workplaces. Let's stop making excuses for not doing the responsible and prudent thing, for the sake of cheap, low-quality and often unsafe goods.

      1. Imhotep

        Re: We Just Advise, We Don't Implement

        If you were a sysadmin, you were not one of the experts I was referring to. Security is now a specialized field and involves much more than the tasks you mention.

        I doubt that there are many unemployed security experts at this time.

        1. martyn.hare

          …and people hire cheap idiots for those roles now too

          Was only a couple of days ago that I “failed” a question on a security questionnaire for “not naming the hypervisor” when I put Hyper-V. Security is a box checking exercise in many companies (so they can get “cyber insurance”) rather than a properly observed process. People would rather pay £300 to a box checking moron than £30,000 to someone who will tear everything apart and help you rebuild everything with security in mind.

  6. storner

    Focus on detection, not prevention

Requiring normal software customers to verify the integrity of software distributed via official channels is completely impossible.

Get some detection systems instead and keep an eye out for behaviour that indicates a compromise. That will catch not only supply-chain attacks but also any other kind of attack.
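The "keep an eye out" step bottoms out in something like a baseline diff; a toy sketch with hypothetical (process, destination) pairs, which is roughly what detection tooling does under the hood at much larger scale:

```python
def flag_anomalies(observed, baseline):
    """Naive behavioural detection: anything a host does that is
    outside its recorded baseline of (process, destination) pairs
    gets surfaced for a human to look at."""
    return sorted(set(observed) - set(baseline))
```

A backdoored update that phones home shows up as a new destination even though the binary itself was validly signed.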

  7. smudge
    Mushroom

    the sunny island of Heraklion

    ENISA, which is soon to be dragged from its Greek home – split between capital Athens and the sunny island of Heraklion –

    Last time I was there, Heraklion was the capital of, and firmly attached to, the sunny island of Crete.

    What happened? Has there been another Santorini (pictured)?

    1. David 132 Silver badge

      Re: the sunny island of Heraklion

      Don't ask me. It's all Greek to me.

    2. eldakka

      Re: the sunny island of Heraklion

      > What happened? Has there been another Santorini (pictured)?

      Heraklion firewalled itself so hard that it became a new island.

  8. Adrian 4

    evil code

    Could we not implement a form of RFC3514 for executables ?

  9. sitta_europea Silver badge

    You mean, like, Windows?

    1. Anonymous Coward
      Facepalm

      Or Linux?

  10. mhs1973

    the fine line...

    .. is what you walk, as a sysadmin. Not that I should have to tell that to anyone reading here.

    That fine line makes the difference between a usable system, and a 99.999% safe system.

Even if you have proper change control, and even if everything you can think of testing checks out, there will be something you missed: developers, testers and sysadmins alike.

That is why bug bounties need to exist for commercial software (and be paid out, not reneged on), and that is why you, as a developer of free software, can never act like there is nothing wrong with your code when someone asks a (hopefully polite) question.

Finally, the penalties for publishing and using exploits need to be so draconian that nobody will even dare think about it. And that goes for anyone, no exceptions.

    1. Imhotep

      Re: the fine line...

      I agree with everything you say, but draconian penalties are the problem. I've heard lots of suggestions, but frankly none that have seemed workable.

      The catch is that so many of the bad actors are protected or actual arms of rogue states - Russia, China and North Korea being prime examples. You can expect no cooperation from them in addressing the problem. And frankly, they can expect no cooperation from the West in addressing exploits by our governments.

      If prosecution and extradition aren't realistic options, what is your alternative? There has been talk of classifying these as acts of war, but the truth is that our banking and power systems are just too vulnerable to risk that kind of reprisal.

  11. Anonymous Coward
    Anonymous Coward

    MITRE's ATT&CK too generic?

    While Lockheed's Kill Chain is quite simple, ATT&CK is not - and it looks to me complex enough to be unfriendly to use. Is ENISA going to release something made of thousands upon thousands of over-detailed descriptions that no one will use, because it will soon become unmanageable?

  12. Julz
    Trollface

    How

    About paying a reasonable amount for code written and tested by competent engineers? Oh yes, now I remember: that's what people used to do, but now it's so much better.

  13. Anonymous Coward
    Anonymous Coward

    Ah......it's the fault of the programmer..............

    ............but WHICH programmer?

    // C example

    #include <stdio.h>

    #include <string.h>

    #include <math.h> // ................. and so on.............

    // your VISIBLE programmer at work here......not in the three lines above!

    // end C example

    # Python example

    import os

    import shutil

    import subprocess

    import glob

    import sys

    from time import perf_counter

    from gi.repository import GObject, Gio, Gdk, Gtk ## .....................and so on..........................

    ## your VISIBLE programmer at work here......not in the seven lines above!

    ## end Python example

    .......and then there's the problem -- in both cases -- that the version release systems are turning over changes at quite a rate.

    .......good luck to anyone asked to "check the delivered code".......................

    1. John Brown (no body) Silver badge

      Re: Ah......it's the fault of the programmer..............

      One mitigation might be to not constantly pull libraries, but to stick with the one you first downloaded. Not every bug fix is something you need "now". If you have a version with a bug that needs fixing and a later version has a fix, pull that and read the change logs. If it doesn't need fixing, and the change logs don't show anything useful to you now, leave it until next time or later. In some places that is what they already do, i.e. proper change management.

      In particular, all those websites and web apps that pull in libraries every time they execute. As mentioned above, next time it runs, the string-padding library might not be there, or maybe it's changed. The very fact that web pages/apps are pulling in third-party libraries because that's the way they are licensed should be a red flag to all devs and something not to be taken lightly. If you can't write your own, look into a self-hosting licence or look elsewhere whenever possible.
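
      A minimal sketch of that "stick with what you vetted" discipline, assuming a hypothetical lockfile mapping each vendored library to the version you last reviewed - package names and versions here are invented:

      ```python
      # Sketch: compare a vetted lockfile against what upstream now offers and
      # flag only the libraries that actually changed, rather than blindly
      # pulling the latest of everything on each run.

      def changed_deps(lockfile: dict[str, str], current: dict[str, str]) -> list[str]:
          """Return libraries whose version no longer matches the vetted lockfile."""
          return sorted(name for name, ver in current.items()
                        if lockfile.get(name) != ver)

      vetted = {"left-pad": "1.3.0", "imaging": "8.2.0"}   # what we reviewed
      latest = {"left-pad": "1.3.0", "imaging": "8.3.1"}   # upstream moved on

      print(changed_deps(vetted, latest))  # only 'imaging' needs a change-log read
      ```

      Only the flagged entries warrant reading the change log and deciding whether to take the update now or leave it for the next cycle.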

  14. Ozzard
    FAIL

    Your mission, should you choose to accept it...

    ... is to verify:

    * the processor and system architectures for side-channel attacks, such as power or speculative execution;

    * the microcode on the CPUs;

    * the code on the management processor on each CPU die;

    * the firmware on the network cards, disk controllers, and everything else that can DMA or can affect data ($deity help you with Thunderbolt);

    * the microcode and firmware running on the flea on each server;

    * the BIOS;

    * the entire code of the kernel you're running and any loadable modules;

    * the entirety of the user space of the operating system(s) you're running;

    ... and *then* you can get onto your own application(s) and the third-party libraries on which they depend.

    No, you can't rely on these being checked against some suitably complex hash (remember that MD5 and SHA-1 are both considered compromised, so it'll have to be better than those) - how did you obtain that hash, and how do you know your channel to obtaining that hash hasn't been compromised?

    No, you *really* can't rely on downloading the application and then comparing against the hash that you... wait for it... *downloaded from the same site*. Pure security theatre.

    No, you can't rely on the browser or program you are using to download code or hash being uncompromised. Or, for that matter, the code you are using to calculate the hash.

    No, you can't rely on your firewall. How do you intend to verify its firmware and its application definitions?

    No, you can't rely on your network switches for data transfer. How do you intend to verify the switch's data and control planes, and its management software?

    No, you can't rely on printouts. How do you intend to verify the application producing the printed version, the printer driver, the printer firmware?

    No, you can't rely on your verification tools. How do you intend to verify them?
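
    To be fair, the mechanical half of the check is the easy bit. A sketch in Python using SHA-256 (MD5 and SHA-1 being out), where the expected digest is assumed to have arrived over some independent, trusted channel - which is precisely the part you cannot verify:

    ```python
    import hashlib

    def verify_download(payload: bytes, expected_sha256: str) -> bool:
        """Compare a downloaded artefact against a digest obtained out-of-band.
        If the digest came from the same site as the payload, this is theatre."""
        return hashlib.sha256(payload).hexdigest() == expected_sha256

    artefact = b"hello"
    # Digest assumed fetched over an independent, trusted channel:
    ok = verify_download(
        artefact,
        "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824")
    print(ok)  # True - but only as trustworthy as the channel the digest came by
    ```

    And, per the points above, this assumes the interpreter, hashlib, and the machine it runs on are themselves uncompromised.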

    Second point: "Doing it right" would cost more than the entire revenue of most businesses - which means 100% chance of failure of the business. That's a higher chance of failure than "ignore it and hope it never happens to us". So, quite correctly, businesses try to hit the sweet spot of minimum overall chance of failure of the business - which means the standard risk management approach of choosing which ones you even bother trying to mitigate.

    Final point: Overall - and I expect to be roundly downvoted for this - if the risk management is done without rose-tinted glasses, *this laziness is good for humanity*. There's no point spending more effort on verification than it takes to recover from the attacks that succeeded due to missing or failed verifications.

    1. EnviableOne

      Re: Your mission, should you choose to accept it...

      Simple:

      "Build your network as if the endpoint is owned.

      Build your endpoint as if the network is owned. "

      Don't trust anything, set constraints so you can only send what you have to, and only collect what you have to.

      TrustNo1 - VerifyEverything - BeParanoid - BeASecurityProfessional
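
      That "only send what you have to" constraint can be sketched as a default-deny egress allowlist - the hostnames and ports below are invented for illustration:

      ```python
      # Sketch: default-deny egress policy. Anything not explicitly allowed is
      # dropped, which is the posture that limits what a compromised update can
      # phone home to. Allowlist entries are hypothetical.

      ALLOWED_EGRESS = {
          ("updates.example.com", 443),
          ("ntp.example.com", 123),
      }

      def egress_permitted(host: str, port: int) -> bool:
          """Default deny: permit only explicitly listed (host, port) pairs."""
          return (host, port) in ALLOWED_EGRESS

      print(egress_permitted("updates.example.com", 443))          # True
      print(egress_permitted("definitely-not-c2.example.net", 443))  # False
      ```

      In practice this lives in the firewall or proxy config rather than application code, but the decision logic is the same: list what must get out, deny the rest.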

  15. Lorribot

    lack of a clue on the basics

    We have a supplier that does not even digitally sign its executables. Not some two-bit outfit either.

    If you can't even get these people to do the basics, what hope is there? And what hope is there if your own company doesn't make it mandatory, or even really care, because "it will stop this important project"?

    1. Michael Wojcik Silver badge

      Re: lack of a clue on the basics

      Yeah. Code signing certainly isn't a silver bullet, and plenty of attacks have bypassed it in various ways. Organizations are often very poor about key hygiene. But defense in depth, man.

      Of course, it hasn't been all that many years since I last saw an organization that didn't even use a change-management system for source code. Even the most basic good practices will go ignored in some places.
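
      The shape of the check is simple: reject anything whose signature doesn't verify against the release key. Real code signing uses public-key signatures (Authenticode, GPG and the like); as a stdlib-only stand-in, here is the same shape using an HMAC with a hypothetical shared release key:

      ```python
      import hashlib
      import hmac

      # Stand-in sketch only: genuine code signing uses asymmetric keys, so the
      # verifier never holds the signing secret. Python's stdlib has no
      # public-key crypto, so a shared-secret MAC illustrates the check.

      SIGNING_KEY = b"hypothetical-release-key"

      def sign(executable: bytes) -> bytes:
          return hmac.new(SIGNING_KEY, executable, hashlib.sha256).digest()

      def verify(executable: bytes, tag: bytes) -> bool:
          """Constant-time comparison, so the check itself isn't a side channel."""
          return hmac.compare_digest(sign(executable), tag)

      release = b"\x7fELF...payload"
      tag = sign(release)
      print(verify(release, tag))                # True: untampered release
      print(verify(release + b"backdoor", tag))  # False: modified after signing
      ```

      Of course, as noted above, this only helps if the key itself is looked after - poor key hygiene turns the whole exercise into a rubber stamp.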

  16. John Savard

    Source Code

    Ah, those idealistic Europeans. As if the programs they purchase from major software companies came with source code that could be verified.

    It's software vendors that have to become more responsible, not their helpless customers.

  17. steelpillow Silver badge
    Facepalm

    ENISA-nough to eat

    Nice to know that ENISA has validated all the code on the PC it wrote its report on. I wonder if they'd care to publish the list?

  18. FlamingDeath Silver badge

    In the real world, if I build, say, an aeroplane with wings that were never stress tested, my sums may suggest the wings can take n amount of stress, but if I never test that, I never really know. If the plane fails, I potentially go to prison, or something else happens TO MAKE SURE IT NEVER HAPPENS AGAIN - lessons learnt, laws changed maybe, and all that crap

    Welcome

    To

    Software

    Where

    Testing

    Is rare

    And

    The

    Punishment

    For

    Not

    Doing a good

    Job

    Is

    Minimal

  19. etoven

    Verification of the entire supply chain doesn't seem practical or possible. A project could have a dozen third-party dependencies, and those dependencies could have third-party dependencies of their own.

    1. Ozzard

      Possible? Clearly yes, for small enough and/or critical enough projects. One ex-colleague of mine wrote his own BCPL compiler for PDP-11, which he bootstrapped from his own PDP-11 assembler, which he originally hand-assembled. Then he wrote his own OS using that compiler. I didn't check what he used for storage and access to the PDP-11 while doing this, but it wouldn't surprise me if he went from scratch there as well.

      Practical? That's a cost-benefit analysis :-).

    2. Anonymous Coward
      Anonymous Coward

      A dozen? Even the smaller projects I work on have dozens of direct dependencies, and the graph of transitive dependencies is a Lovecraftian horror. And that's despite a conservative development culture and imposing some friction, such as having to register every dependency and have it approved by a central team.

      But at least we're tracking them and scanning for published vulnerabilities and the like, which seems to be more than what many, perhaps most, organizations do.
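
      The blow-up is easy to see with a toy walk over a (hypothetical) dependency graph - every package and edge below is invented:

      ```python
      # Sketch: walk a dependency graph and count what a handful of direct
      # dependencies really drags in transitively.

      def transitive_deps(graph: dict[str, list[str]], root: str) -> set[str]:
          """Every package reachable from root, excluding root itself."""
          seen: set[str] = set()
          stack = list(graph.get(root, []))
          while stack:
              dep = stack.pop()
              if dep not in seen:
                  seen.add(dep)
                  stack.extend(graph.get(dep, []))
          return seen

      graph = {
          "myapp":     ["web", "orm"],      # two direct dependencies...
          "web":       ["http", "tls"],
          "orm":       ["db-driver", "http"],
          "http":      ["tls"],
          "tls":       [],
          "db-driver": [],
      }
      print(sorted(transitive_deps(graph, "myapp")))
      # ['db-driver', 'http', 'orm', 'tls', 'web'] - five to vet, from two
      ```

      Real package graphs are orders of magnitude worse, which is why tooling that at least enumerates and scans the full closure matters.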

  20. DCdave
    Facepalm

    I feel so foolish

    It seems so obvious after reading the report, as a sysadmin I should just read the code of all the (in many cases closed-source) software running in the data centre. Why didn't I think of it before? Then we'd be safe.

    1. Yet Another Anonymous coward Silver badge

      Re: I feel so foolish

      Remember to also check the masks for the processors

      And if you believe Bloomberg, you also have to look for invisible microchips on the motherboards that talk to China

  21. Michael Wojcik Silver badge

    Not ENISA's best work

    This report is a bit bizarre.

    Their definition of "supply-chain attack" is at least spelled out pretty well, but it's also a bit arbitrary. They require "a supplier being compromised", so for example uploading malicious packages to public repositories doesn't count unless you actually replace someone else's package – even for "brandjacking" cases (chapter 6).

    And even given their restrictive definition, I'm not sure how NotPetya – by far the most damaging supply-chain attack thus far – didn't make the cut. It involved planting a malicious update in the update stream for the M.E.Doc software package. That update was then used to compromise systems that had M.E.Doc installed. The payload was a worm, so it spread much further, but the initial infection vector was supply-chain.

    That said, the taxonomy they propose is straightforward and easy to use. And they have an icon of a hacker as some dude in an anorak, so they're not flouting tradition.

  22. Joe Montana

    Segmentation...

    Supply chain attacks can and will happen. You can't prevent them entirely without EXTREME cost, but you can reduce the risks cost-effectively by increasing segmentation.

    Instead of having a big flat network with a single authentication domain, where a single breach gets you a foothold and an attacker can easily escalate to take control of everything, you have every service segmented and partitioned off, with active monitoring on the interconnects.

    You may not be able to prevent every attack, but you can hopefully detect an attack and keep the damage contained.

  23. Robert Grant

    > The UK is taking advantage of Brexit to lower incident reporting thresholds in security frameworks first created by EU laws

    Pretty sure this was already possible if we'd wanted to do it.

  24. YetAnotherJoeBlow
    FAIL

    "It's not feasible to ask every org to break out disassemblers, source code editors, and network and memory analysis tools, and have staff on hand capable of using them, to inspect every update, be they open or closed source."

    Well that highlights their ignorance.

    Whoops, I am ranting again.
