Have we learned anything from SolarWinds supply chain attacks?

The hack of SolarWinds' software more than two years ago pushed the threat of software supply chain attacks to the front of security conversations, but is anything being done? In a matter of days this week, at least four disparate efforts to shore up supply chain security were declared, an example of how front-of-mind such …

  1. Headley_Grange Silver badge

    OSS

    "..organizations need to first secure the open-source software they use."

    That would involve engineers - you know, people who can understand the use case, the risks, the current state of the art and the related security issues, turn them into requirements, and then get it coded (in-house or subbed out), tested into production and maintained as threats evolve. In addition, those same engineers could sort out your systems so they're both resilient and less prone to attack. Of course, this would cost a lot more than just downloading stuff off the internet for free, plus an insurance premium to protect the bottom line when things go wrong.

    1. John Brown (no body) Silver badge

      Re: OSS

      Yes, because "secure the open-source software" is actually the easiest part. It's much harder to secure the commercial code that you can't inspect, relying only on the supplier to do it for you, or on inspecting the inputs and outputs of the "black box". If securing any FOSS you use is your responsibility, then securing the commercial software you use is the supplier's responsibility. Making the commercial supplier legally responsible if it's not secure might be a good start, starting with MS. At the very least, legally invalidating the common licence clause which basically absolves the company making the software of any responsibility for it even working at all, let alone working properly.

      1. An_Old_Dog Silver badge

        Re: OSS

        Making the commercial supplier legally responsible if it's not secure

        Careful, there ... it's easy to create knee-jerk legislation which has unintended bad consequences. Such legislation as you propose could easily force the end of commercial mass-market software, including shareware. "Software Security Error Insurance" might be ruinously expensive, or simply unobtainable. (Note current insurance companies' practice of not paying policy-holders' malware claims by dumping those claims into the you-are-not-covered "state actors/war/terrorism/Act of God/etc." bins.)

        The problem is writing laws which fairly and accurately distinguish between honest human mistakes, careless, ignorant, and/or fuck-security-I-don't-have-time-mentality coding (and management acceptance/encouragement/insistence of the same), and bad development processes (got automated builds? got automated regression testing*? got bug tracking? got sane bug-triage processes*? etc.).

        * In the case of the Oracle VirtualBox team, the answers to these are obviously "no" when you look at their bug tracker.

        1. Pascal Monett Silver badge

          "it's easy to create some knee-jerk legislation which has unintended bad consequences"

          As true as that is, it might be time to put an end to the free lunch buffet that companies have been enjoying since the dawn of the Internet. Borkzilla is first in line for never accepting any liability, yet is there any count of the man-years its successive OSes have cost in time and resources? Of course not.

          I am obviously not advocating that the major OS companies be held liable for every Tom, Dick & Harry's multiple issues - they would shut shop immediately and with good reason.

          But if we can't have a guarantee that the software works 100% of the time, we should at least have a guarantee that the OS vendor has every verification and control in place to ensure that, at least as far as security is concerned, every possible contingency that has been thought of has been addressed.

          Then, of course, it will be the flying circus of clown acts to list all possible contingencies that should bring liability. I'm sure there's quite a list, but not salting and hashing passwords is something that should definitely entail jail time - and for the Board, not for the developers.
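          The salting-and-hashing the commenter mentions is also cheap to get right. A minimal sketch using only Python's standard library (the 600,000-iteration count and function names are illustrative choices, not anything from the article):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash; store (salt, digest), never the password."""
    salt = os.urandom(16)  # a fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("Tr0ub4dor&3", salt, digest)
```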

    2. trindflo Bronze badge
      Devil

      Re: OSS

      engineers... you're not talking about those people we just laid off, are you?

      1. stiine Silver badge

        Re: OSS

        No, that was QA.

    3. Anonymous Coward
      Anonymous Coward

      Re: OSS

      ""..organizations need to first secure the open-source software they use."

      That would involve engineers - you know, people who can understand the use-case, risks, current state-of-the-art, the related security issues, turn them into requirements and then get it (in-house or subbed out) coded, tested into production and maintained as threats evolve. In addition, those same engineers could sort out your systems so they're both resilient and less prone to attack. Of course, this would cost a lot more than just downloading stuff off the internet for free plus an insurance premium to protect the bottom line when things go wrong."

      Funnily enough, we just had, over the weekend, this same kind of conversation with my daughter, about an industrial production line she's setting up. Classically enough, the whole line is driven by a workstation, no PLC (a bit of a surprise, though).

      She first said it was WinXP-based, which prompted me to almost spit out my drink! Actually, since it's new, I think she got it wrong; it's probably an industrial version of Win10.

      The conversation went on to how they planned to use this workstation and of course, IT security.

      It turns out:

      - they'll use it with a SaaS solution, therefore, internet connected, even if no office work will be done there

      - security awareness is basically at altitudes close to the earth's core, as "security == firewall"

      - the IT manager (I kid you not) is ... the procurement director!!!

      Gosh, at least her CV is appealing and she's young!

  2. thx1111

    Is it soup yet?

    Without "repeatable builds", it may be difficult to reliably validate the supply chain.

    1. b0llchit Silver badge
      Facepalm

      Re: Is it soup yet?

      But, but,... that would require expertise and well-designed environments. Not just playing the infinite import game and having the not-my-problem attitude. Who will pay for all that extra work, knowledge and hours? You?

      /s (for those not getting the hint from the tone)

  3. Anonymous Coward
    Anonymous Coward

    Meanwhile...

    ... lots of entities hacked worldwide through an ESXi vulnerability patched two years ago.

    The graybeard sysadmin who thinks the only evil he needs to care about is SystemD, and that he doesn't need to patch anything else because it would ruin his uptime (and require work and responsibility), is the real single point of failure today. Until they become extinct, no technology or procedure can make IT more secure.

    1. Anonymous Coward
      Anonymous Coward

      Re: Meanwhile...

      Wait, are you saying systemD doesn't automatically care for itself, somehow? You'd better check, that doesn't sound right, surely the likes of poettering wouldn't inflict such a thing on everyone.

      1. Anonymous Coward
        Facepalm

        Re: Meanwhile...

        QED

    2. Anonymous Coward
      Anonymous Coward

      Re: Meanwhile...

      You are so far removed from the truth. The ‘graybeard’ sysadmins know all this, but are hampered by the cool kids living in a make believe 24/7 world demanding that there mustn’t ever be downtime for maintenance. It’s a reasonable aspiration, but only if you build in the 24/7 capability from day one.

      1. Anonymous Coward
        Anonymous Coward

        Re: Meanwhile...

        Oh sure, it's always someone else's fault - graybeard and childish too... <G>

    3. Anonymous Coward
      Anonymous Coward

      Re: Meanwhile...

      On the contrary, real sysadmins, graybeard or not, understand that uptime alone doesn't rule all, and there is much more to the environment than systemd.

      Perhaps some of those years-old hacked ESXi boxes didn't have actual sysadmin folks looking after them. E.g. consider the possibility they might have had overworked devops types, some of whom are more "dev" than "ops", so the latter area suffers. Some may have had no one taking care of them at all; I've come across plenty of free-running stuff over the years.

      I expect you'll often find that things get worse when the last sysadmin has left the job, assuming one was there to begin with: some outfits think they can have a QA or coder or administrative staff person "do IT" in their spare time. I've even had to help clean up a situation where the hands-on IT person was more of a facilities caretaker than anything else -- not even the proverbial "electronic janitor".

      1. John Brown (no body) Silver badge

        Re: Meanwhile...

        "some outfits think they can have a QA or coder or administrative staff person "do IT" in their spare time. I've even had to help clean up a situation where the hands-on IT person was more of a facilities caretaker than anything else -- not even the proverbial "electronic janitor"."

          Aaaaand we come full circle. That pretty much describes many of the SMEs I used to visit back when MSDOS was still king of the desktop and Novell was king of the servers. Since it was often the accounts dept/office which got the first PCs, the Finance Director, Accountant or finance office manager was often also the "IT manager".

      2. Anonymous Coward
        Anonymous Coward

        "things get worse when the last sysadmin has left the job"

        I'm not saying all sysadmins have to leave the job - it's only the graybeard one who can't bring his skills up to the current situation and still believes he's living in the past. That everything has to stay the same, that he knows everything, and that he doesn't need anything else. A lot of IT today is going backwards to the 1970s for this reason, and systems have become much more brittle because of it. One day a "great disaster" will strike, and maybe someone will understand the mistakes made.

        1. Anonymous Coward
          Anonymous Coward

          Re: "things get worse when the last sysadmin has left the job"

          Your ageism is showing.

          1. Binraider Silver badge

            Re: "things get worse when the last sysadmin has left the job"

            To say nothing of failing to recognise the advantages of certain design philosophies over the well documented bodge that is MS…

  4. Anonymous Coward
    Anonymous Coward

    1984....2006....And This Is News Today?

    Matt Rose spake thusly: "...organizations have to not only be concerned about malware being injected into a compiled object or deliverable, but also of the tooling used to build them..."

    Well done Matt. But a bit of research might have turned up this....from 1984:

    Link: http://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf

    Ken Thompson provided a recipe for hacking a compiler....with little chance that anyone would notice.

    And recent research seems to show that Ken was right to worry...this in 2006:

    Link: https://www.schneier.com/blog/archives/2006/01/countering_trus.html

    Yup....here we are in 2023....and folk have been talking about the problem since 1984!!!

  5. captain veg Silver badge

    where to start?

    "a miscreant can inject malicious code into a piece of software before the compromised software is sent out to customers and compromises those systems"

    Er, how?

    > attackers have targeted code repositories like GitHub and PyPI

    Ah, that's how.

    Don't use them, folks, unless you're prepared to audit the code yourself.
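    One step short of a full audit is to record the digest of the copy you did audit and refuse anything that differs; pip's `--require-hashes` mode applies the same idea to PyPI installs. The names `safe_to_install` and `AUDITED_SHA256` below are hypothetical, in a minimal sketch:

```python
import hashlib

# Digest recorded when the artifact was audited; the value here is
# derived from toy bytes purely for the sketch.
AUDITED_SHA256 = hashlib.sha256(b"example wheel contents").hexdigest()

def safe_to_install(artifact: bytes, expected_sha256: str) -> bool:
    """Only accept an artifact whose digest matches the audited copy;
    a tampered upload hashes differently."""
    return hashlib.sha256(artifact).hexdigest() == expected_sha256

assert safe_to_install(b"example wheel contents", AUDITED_SHA256)
assert not safe_to_install(b"tampered contents", AUDITED_SHA256)
```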

    > and companies like CI/CD platform provider CircleCI,

    Never heard of it, but the name reeks of fad-du-jour.

    > What the CircleCI incident illustrates is that organizations have to not only be concerned about malware being injected into a compiled object or deliverable, but also of the tooling used to build them,

    No. It illustrates the stupidity of making your enterprise reliant on third-party code that you haven't yourself validated.

    That's all.

    -A.

    1. Binraider Silver badge

      Re: where to start?

      <quote>No. It illustrates the stupidity of making your enterprise reliant on third-party code that you haven't yourself validated.</quote>

      So, what do you do with the vast quantity of operating systems, embedded systems, applications, patches, scripts, and $DEITY knows what else? Checksums, standardised disk images? That's not validating against the supply chain threat. The list of computers where you can fully audit the entire software chain down to the firmware is rather small, and those that exist are either low-powered or very expensive workstations.

      The long-standing "plus" of commercial software is having a whipping boy to sue when they inevitably screw up. In closed-source land, unless you have a lawyer big enough to tackle MS (et al.) you get what you are given, with little say in the matter.

      In FOSS land, collaboration rather than hostility drives improvement. Spot a bug? Report it. If it's important to you and your organisation, contribute to development directly.

      Establishing trust in a repo of pre-compiled software is not without difficulty in either land. Good practice can only get you so far, because clearly you're not going to be writing your own OS and tools for your own organisation.

    2. Anonymous Coward
      Anonymous Coward

      Re: where to start?

      I hate to break it to you, but very few companies that are running Windows have validated that code. Some have. I know this because I've seen the boxes of Windows source code myself.

      1. captain veg Silver badge

        Re: where to start?

        I seem not to have expressed myself clearly.

        So far as I know, Microsoft wrote all the code for Windows itself. I'm talking about projects which pull in code from the public domain.

        -A.

        1. John Brown (no body) Silver badge

          Re: where to start?

          "So far as I know, Microsoft wrote all the code for Windows itself."

          A lot of the stuff included with Windows is from companies they bought, depends on how you define where the OS ends and the included apps start. Not sure about nowadays, but for many years they used the FreeBSD networking stack and credited the Regents of the University of California as per the (very open) licence terms.

          1. captain veg Silver badge

            Re: where to start?

            Do you think that they reviewed the code before incorporating it?

            -A.
