It may take decade to shore up software supply chain security, says infosec CEO

The more cybersecurity news you read, the more often you seem to see a familiar phrase: software supply chain (SSC) vulnerabilities. Varun Badhwar, founder and CEO at security firm Endor Labs, doesn't believe that's a coincidence. "The numbers are going to go from 80 to 90 percent to maybe 95, 98, 99 percent of your code in …

  1. ecofeco Silver badge
    Windows

    Always relevant article

    https://medium.com/@antweiss/learned-helplessness-in-software-engineering-648527b32e27

    1. Version 1.0 Silver badge
      Terminator

      Re: Always relevant article

      We need plenty of discussion about cybersecurity prevention, but just talking about "solutions" is only a discussion ... so many discussions everywhere sound wonderful, and then we discover the problems when we implement our "solutions" ... it's a bit like talking about how to get to the top of Mount Everest; do we ride a bike, ski up there, drive a Tesla, or maybe even walk?

      I'm not complaining, this is just the software environment we've all lived with for years now - we describe a problem, wait for a solution, install an upgrade, and then see a few new problems, so we buy a new computer and start seeing a few problems again that may have evolved from the original infections. We all want easy access to the Internet, but thinking about how nice that is for us usually means we don't realize that easy access has become the wider problem. For years now I've been thinking that we need an icon for our security discussions - a pair of wire cutters - LOL El Reg, this is me laughing at the icons every time; this icon choice is so much better!

    2. nintendoeats

      Re: Always relevant article

      Hum, sounds just like my last employer.

  2. sitta_europea Silver badge

    A decade to get it fixed? I admire the optimism but I don't think I can share in it.

    1. Doctor Syntax Silver badge

      Share/spare? I was going to say exactly that. We don't. Those parts of it that don't understand the meaning of "urgent" need to get dropped.

  3. CowHorseFrog Silver badge

    The primary cause of the security problem is leadership and management. It's a terrible combination of greedy arseholes who don't care and will sell out everyone for another dollar, and basically none of them have any idea what the problem is, so they always fail with their uninformed, dumb decisions.

    1. ecofeco Silver badge

      I see a tech douche bro downvoted you.

      Have my upvote!

      1. CowHorseFrog Silver badge

        @Eco

        Probably got a downvote from another parasite management type... who doesn't want the status quo to change, because otherwise they'd have no job and would be working at Maccas making burgers and chips, where they belong.

    2. Michael Wojcik Silver badge

      Anyone who says "the primary cause of the security problem" doesn't understand security and doesn't understand IT.

      There is no single primary problem. There will not be any single primary solution. Complex problems do not have simple descriptions and rarely have simple solutions.

      As usual, CHF's obsession with corporate leadership has just resulted in an adolescent reduction that offers no actual insight and has no explanatory power. Try thinking critically.

      1. CowHorseFrog Silver badge

        Of course there's not a single cause, but the stupidity and greed of CEOs is a major cause that starts a bad trend which keeps on causing more problems.

  4. HuBo Silver badge
    Holmes

    The tip of the iceberg lettuce

    Good points! And I guess the way AI comes into this is by stochastically moving the target, in all directions, at random times, substantially compounding the SSC trustworthiness assessment issue (if I understood right). Apparently, when AI systems are trained to imitate humans, they might end up acquiring goals of agency that, for the more devious of them, could lead to the "willful" production of malicious code, infested with viruses, worms, and backdoors, that is superhumanly clever in its design and cranked out at rates faster than even the most chubby of cybercrims can achieve. It seems then that Badhwar and the Endor Labs gang will have their work cut out for them (for decades indeed) addressing that vast automation in genAI's self-motivated cyberthreat code manufacture (including those ever-evolving serpentine genAI-produced strategies for evading detection).

    A great business to be in going forward!

  5. ColinPa Silver badge

    AI systems

    If AI systems are being trained on available material, does this mean these systems will repeat what it has learned?

    1. Snowy Silver badge
      Holmes

      Re: AI systems

      More likely repeat the mistakes of the past, but don't worry: "lessons will not be learned"

    2. ecofeco Silver badge
      Mushroom

      Re: AI systems

      Why do you think some of us are saying GIGO as loud as we can?

      And trying to point out the obvious consequences? Which will not be just some random localized failure, but a worldwide catastrophe.

    3. Michael Wojcik Silver badge

      Re: AI systems

      does this mean these systems will repeat what it has [sic] learned?

      Not in any useful sense, no, though plenty of Reg commentators who can't be bothered to learn anything about LLMs will tell you that.

      The problems with using large, deep autoregressive models trained on large, dirty corpora, with no meaningful interpretability or explicability, for generating anything used for any serious purpose, are many and complex. They certainly don't boil down to "it will repeat what it has learned" (even if sometimes you can find a starting vector and gradient that will do a significant amount of that). They're much, much worse than that.

      Broadly, the problems with using uninterpretable/inexplicable models fall into two categories: the output of the model, and what happens to the people using it. It's a mistake to focus exclusively on either category, much less on a single problem or class of problem within one category.

    4. CowHorseFrog Silver badge

      Re: AI systems

      Given AI learns from the web, and the web is full of idiots, how can AI possibly learn anything useful?

      When you remember that, the Bing chat bot going Nazi makes perfect sense.

  6. Anonymous Coward
    Anonymous Coward

    Shore out with offshoring + AI

    No longer necessary to have your employees train their replacements! At least that is the promise.

    1. Anonymous Coward
      Anonymous Coward

      Industrial espionage

      That may be much of the reason why Microsoft and others try to push AI copilots into nearly everything. They not only have the supposedly better, paying versions, but also the hard-to-turn-off "free" version littered all over the operating system and other software (Office and more).

      This combination, plus their "classic" telemetry, allows them to gather massive amounts of data on how users, including people using their software to do their jobs, actually work. In other words, they have access to mountains of information on how to professionally and efficiently organize work. In many companies, the workflow (how individuals and the workplace as a whole organize their work) is a big part of their success and competitive advantage. I'd go as far as to say it is often more important than the exact technical expertise available, so long as the latter is adequate. It's the classic theoretical scientist versus pragmatic engineer dilemma: the former has better knowledge of the technology but often fails to make commercial, let alone successful, products from it; the latter may lack some in-depth knowledge of the technology, but has plenty of knowledge and skill in just making it work and making it profitable.

      Microsoft's and others' AI tools may be of questionable use to actual users, but so long as users accept them (or fail to turn them off), the data is slurped and sent to the mothership. Even if Microsoft and OpenAI and Meta and Alphabet and... can't turn it into really usable "co-workers" yet, that data sits in their data warehouses waiting for the AI and data-mining technologies to improve.

      In short, it very much resembles the biggest industrial espionage operation of all time, even if no proprietary technical documents are slurped. Even if all that stolen information (and theft is what happens during industrial espionage) can't be put to good use yet, the act and the intent are present. As such, it should legally be treated for what it is: an illegal industrial espionage effort unlike anything ever seen in human history, dealt with very firmly and sparing no one, including top execs, in a way comparable to an employee of a high-tech company like ASML copying GBs of data onto a stick to sell to his next employer or his government. To be precise: that employee only stole one stick's worth of information on a single project at a single company; the CEO of a slurping AI firm needs a sentence more appropriate to the amount of theft ordered on his behalf.

      1. Anonymous Coward
        Anonymous Coward

        Re: Industrial espionage

        If the goal is to produce competitive products, I'm sure that can be a viable strategy. Maybe I'm not thinking hard enough, and the goal is really to not need competitive products - let's call it the Boeing strategy.

        I'm reminded of my uncle (now dead) who over 60 years ago was working as a cowboy on a Texas ranch where a cow got into a large sack of dried beans and ate them all. Over the next 24 hours the beans digested/fermented and the gas expanded but could not all escape -- they were with the cow all night and the vet even cut it open to try to relieve the pressure, but in the end it died.

        Thinking that feeding more and more data to the system will improve it proportionately - where is the evidence? Even when fed John Grisham books, all it can extract is the superficial prose. It can't write a best seller (although it can clog Amazon with junk books). In fact, it can't even write interesting fiction.

        Same with coding. It has utility - it knows a lot of prose so if you have to write a script file in a language you rarely use it is helpful. But it is useless at writing the plot - and that won't change with more data.

    2. steelpillow Silver badge
      Holmes

      Re: Shore out with offshoring + AI

      But that makes your AI an untrusted, unvetted source, created by a previous generation of untrusted, unvetted sources. Quis custodiet ipsos custodes?

      Which is the safer and more trustworthy? Proprietary code, unvetted and unvettable by independent parties, AI that has written/trained itself unto the seventh generation and counting fast, or F/LOSS well visible to the thousand eyes (should they be arsed)? If you were a black hat, which would you drop your evilware into? Just askin' ;-)

      1. ecofeco Silver badge
        Mushroom

        Re: Shore out with offshoring + AI

        EX-DAMN-ZACTLY!!

        We are SO boned.

  7. captain veg Silver badge

    FOSS misunderstood

    Stallman's concept of free software arose out of a desire/need to see the source, to fix the bugs and adapt it to his needs.

    Modern business practice is just fill your boots. Hardly anyone actually looks at the source, let alone audits it. It's free of charge, what a bargain!

    -A.

    1. ecofeco Silver badge

      Re: FOSS misunderstood

      Quality control is a cost center. Customer service is for losers. /s

      I only wish I were joking.

    2. mpi

      Re: FOSS misunderstood

      To be fair, who has the time to do so?

      When "frameworks" were a few K lines of code, sure, you could vet them.

      Who's gonna vet modern JS frameworks? Who is going to vet libraries in the 100k LoC range?

      More complex software == More time required to vet it.

      1. captain veg Silver badge

        Re: FOSS misunderstood

        > To be fair, who has the time to do so?

        Companies who don't want their reputations trashed by inadvertently shipping malware to their customers?

        Alternatively, programmers could, you know, write their own code.

        -A.

  8. mpi

    "includes reliable software bills of material and better vetting of open-source libraries"

    Yes, small question: Who is going to do the latter, and who is going to pay for the people doing it?

    "Bill of Materials" has a nice ring to it, and sure, it certainly is nice to know what's in the pie, but knowing the ingredients is worth pretty much squat when there are no resources to vet them.
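The "knowing the ingredients" half of that is at least cheap to automate. A minimal sketch in Python, assuming a CycloneDX-style JSON layout (the `bomFormat`, `components`, and `purl` field names come from that spec; the component data here is invented for illustration):

```python
import json

# A toy CycloneDX-style SBOM fragment (invented sample data).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "left-pad", "version": "1.3.0", "purl": "pkg:npm/left-pad@1.3.0"},
    {"name": "mystery-lib", "version": "0.0.1"}
  ]
}
"""

def list_components(raw):
    """Return (name, version, purl) for every component in the SBOM."""
    bom = json.loads(raw)
    return [(c.get("name"), c.get("version"), c.get("purl"))
            for c in bom.get("components", [])]

def unidentified(raw):
    """Components with no package URL: the ones nobody can even look up,
    never mind vet."""
    return [name for name, _, purl in list_components(raw) if purl is None]

print(list_components(sbom_json))
print(unidentified(sbom_json))  # knowing the ingredients != vetting them
```

Which rather proves mpi's point: a script like this tells you `mystery-lib` is in the pie, but says nothing about whether anyone has ever audited it.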

  9. ExpatZ

    Just another attack on open source. It is much easier to use legal means to force big companies to place undisclosed back doors in closed-source software than it is to sneak one into open source code that is constantly being looked at by thousands of developers who are specifically watching for such things.

    Notice how the uptick in these attacks on open source coincides with the new spying laws in the US.

    You all should go see what those laws say, it directly impacts the discussion and the security of our systems and data.
