Rubbish software security patches responsible for a quarter of zero-days last year

To limit the impact of zero-day vulnerabilities, Google security researcher Maddie Stone would like those developing software fixes to stop delivering shoddy patches. In a presentation at USENIX's Enigma 2021 virtual conference on Tuesday, Stone offered an overview of the zero-day exploits detected in 2020. A zero-day, she …

  1. Mike 137 Silver badge

    "We need correct and comprehensive patches for all vulnerabilities from our vendors," she said.

    Actually, we need tolerably bug free software to start with from all our vendors.

    I've never been able to quite get my head round the assumption that it's perfectly OK to release a product riddled with flaws and rely on point fixes after the fact to eliminate them.

    In no other branch of engineering would this have passed muster in the past, but now software has permeated most other branches of engineering, their standards are being dragged down to that level as well.

    This is not a trivial issue - lives are already being lost as a result.

    1. Pascal Monett Silver badge

      I definitely agree, but we all know why this has come to pass: the availability of the Internet.

      As much as I simply cannot do without it these days, back when it was barely available over a 56K modem, software makers had to get it right out of the door, because patching was inconceivable. You would have had to create tens of thousands of floppies, make deals with computing magazine vendors and publish that you had a patch for your product. The cost alone would have been enough to make beancounters faint.

      No, back in the day, you got it right before getting it out. But today, we don't need to do that any more. We get it just about right and ship it, secure in the notion that, if somebody happens on an issue, well, we'll post a patch and all will be well.

      What is worse is that patching has apparently become a shoddy process as well. The article states:

      So he analyzed the patch and found it was incomplete – it addressed one way of exploiting the bug but not another. So that led to CVE-2020-6883 and another patch. But that patch caused other problems, so an updated patch was issued

      The coder patched the problem he was given; he did not analyze the issue in its entirety. Then he had to patch the other part, and screwed that up so he had to re-patch. Three patches for the same thing. That is shoddy programming, because we can.
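      A minimal sketch of that failure mode, in Python (entirely hypothetical — not the actual Foxit code or CVE, just an illustration of a patch that blocks one way of expressing an attack but not another):

```python
# Hypothetical sketch of an "incomplete patch": the first fix rejects the
# obvious path-traversal payload but misses a URL-encoded variant of it.
from urllib.parse import unquote

def patched_once(path: str) -> bool:
    """First patch: reject the literal '../' sequence only."""
    return "../" not in path

def patched_properly(path: str) -> bool:
    """Follow-up patch: decode first, then check the normalised form."""
    decoded = unquote(path)
    return "../" not in decoded and "..\\" not in decoded

# The first patch passes the test case from the original bug report...
assert not patched_once("../../etc/passwd")
# ...but an attacker's URL-encoded variant sails straight through it,
# which is exactly why a second patch (and a second CVE) was needed.
assert patched_once("..%2F..%2Fetc%2Fpasswd")        # incomplete!
assert not patched_properly("..%2F..%2Fetc%2Fpasswd")
```

      The point is that the first fix was "correct" against the reported reproduction steps; only variant analysis catches the encoded form.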

      1. iron Silver badge

        > barely available with a 56K modem

        ROFL. Apparently you never had the joys of 28.8, 14.4 or BBS over the venerable 2,400 baud. In comparison 56k was lightning fast.

        1. adrianrf
          Facepalm

          2400? bloody luxury!

          hell, I’m old enough to remember accessing remote systems at 300 baud.

          1. Anonymous Coward
            Anonymous Coward

            That was how I started - a Maplin self-build 300 baud modem kit and BBSes. Well before there was an internet.

            Folded aluminium chassis with a black powdered front panel and dark blue powdered top, it was. It might still be in the garage somewhere.

            Got into major trouble when it turned out the BBS I used for hours each night was actually a long-distance call. My dad hit the roof when that quarterly bill came in.

            1. Amused Bystander

              Hah, I laugh at your 300 Baud modems

              I developed a bunch of BS6403 modems for two companies - they were replacing the old 50 Baud +/- 80V telex system.

              BS6403 used V21 signalling but S_l__o___w____e_____r......

      2. tiggity Silver badge

        "The coder patched the problem he was submitted, he did not analyze the issue in its entirety. Then he had to patch the other part, and screwed that up so he had to re-patch. Three patches for the same thing. That is shoddy programming, because we can."

        It may well be that the programmer gets a set of test cases to prove the problem and makes a fix that satisfies those test cases - job done, but the test cases did not cover all eventualities.

        It's naïve to assume the programmer has the same skillset / knowledge as the attackers, so quite likely the programmer would be blissfully unaware of other attack variants.

        These sorts of bugs really need someone with a hacking / attacking / pen test mindset to spec the test cases that the programmers should fix.

        If e.g. some of these bugs were essentially in a JS compiler / engine, then you cannot assume the devs had deep JS knowledge. A classic comp sci uni project is (well, was, back in the day when I was at uni - may have changed) to write a compiler for a language you had never used before / knew nothing about (the compiler itself was written in a language you knew), as a classic demonstration of working from a spec. Typically it would be a compiler for a "made up" language, so students could not crib stuff online.

        1. TheMeerkat

          "The coder patched the problem he was submitted, he did not analyze the issue in its entirety“

          It is called “Agile development”.

          You just do what the post-it note on a whiteboard says and move on. Or your Product Manager and Scrum Master will be unhappy.

          1. Charlie Clark Silver badge
            Stop

            There's enough software full of holes that was developed before agile development became a thing. In fact, agile's reliance on testing does at least give you a basis for fixing stuff.

          2. Claptrap314 Silver badge

            As in everything

            There is proper Agile, and there is the garbage that gets called Agile.

      3. Charlie Clark Silver badge

        As much as I simply cannot do without it these days, back when it was barely available with a 56K modem, software makers had to get it right out of the door, because patching was inconcievable.

        I don't know what planet you were on, but I don't remember bug-free software in the 1980s. Without a permanent connection, the vectors are different but exploits are as old as software itself.

    2. Anonymous Coward
      Anonymous Coward

      Re: "We need correct and comprehensive patches for all vulnerabilities from our vendors," she said.

      Agreed, but software is hard and takes time, and time is money, and patches don't make money, so short cuts and/or a lack of care are inevitable (alas).

      Perhaps software engineering in the old days was better: the hardware was less complex/powerful and one had to understand it to get the best out of it. But we're expected to build everything quickly these days, so yes, we use 3rd party libraries and put software together like high-stakes Lego.

      As everything is connected to the internet, it's far more likely that your software is going to be probed for weaknesses than in the old modem days. It would be interesting to see how many security flaws are in this old software which developers apparently "got right to begin with".

      I'm not defending modern software bugginess, but one under duress never does their best work.

      1. Mike 137 Silver badge

        "one had to understand the hardware to get the best out of it."

        One still does have to understand the hardware to get the best out of it. Otherwise you'll just get the minimum out of it compared with what's possible. The problem is that this has been largely forgotten in the race to beat competitors' products 'out the door'.

        Software is, always has been and always will be, no more than a way of generating the signals that control hardware. The majority of vulnerabilities at the 'metal level' originate in failure to take account of the physical nature of the hardware, but that's hardly taught these days - not at all in commercial development or even security practitioner training. Indeed, when I included a reference to Boolean algebra in the content of a security practitioner training course, I was told by the approving body to take it out because the candidates wouldn't know what that meant.

        Sadly, the only really informed folks in the software domain to this level of detail appear now to be those who write and those who independently discover the vulnerabilities.

        However the problem could be reduced hugely by the simple expedient of proper testing. By proper I mean testing exhaustively for what should not happen as well as what should.
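        To make that concrete, here is a minimal sketch (hypothetical function, in Python) of what "testing for what should not happen" looks like: the positive case is the one everybody writes; the negative cases are the ones that get skipped.

```python
# Hypothetical sketch: exhaustive testing means asserting that bad input is
# rejected, not just that good input is accepted.
def parse_port(value: str) -> int:
    """Parse a TCP port number, rejecting anything out of range."""
    port = int(value)              # raises ValueError on non-numeric input
    if not 0 < port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

# Positive test: what SHOULD happen.
assert parse_port("8080") == 8080

# Negative tests: what should NOT happen -- these are the ones routinely missing.
for bad in ["0", "65536", "-1", "8080; rm -rf /", ""]:
    try:
        parse_port(bad)
        raise AssertionError(f"accepted bad input: {bad!r}")
    except ValueError:
        pass  # rejection is the expected outcome
```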

        1. Primus Secundus Tertius

          Re: "one had to understand the hardware to get the best out of it."

          @Mike137

          You are so right about proper testing. Two reasons why it will not happen.

          1. Testing comes near the end of the project. Given all the overruns in the early stages, there is never enough time to test all the fail cases.

          2. If the fail cases are tested, they will show that the design did not consider the fail cases. Probably because the design document did not specify how failures should be treated.

        2. hoola Silver badge

          Re: "one had to understand the hardware to get the best out of it."

          Based on what I see where I work, the only purpose of testing is to provide a list of passes so that you can go live. It matters not what you test, only that all the tests pass. The result is testing that is not fit for purpose: a significant amount of the time spent on the task is performing tests for things you know will pass.

          Occasionally they get a surprise when something they expect to pass fails and then everything goes into meltdown whilst they figure out what happened.

    3. Boris the Cockroach Silver badge

      Re: "We need correct and comprehensive patches for all vulnerabilities from our vendors," she said.

      I can only upvote this once, but I'd cheerfully upvote this 1000 times.

      In my game, there are no bugs in the code: if our boss wants 20,000 widgets made, then the code must be right.

      We test.. and we test, and we run slow motion tests, verify tool paths, verify the robots are coming in and clamps are firing in the right order, because it's not a case of "oh well.. what's a bit of data loss?" but the likelihood of pieces of metal being fired out of the machines if we get it wrong.. (and yes, it does happen)

      Again with our customers, they know if they screw up, it's not a matter of "oh well, let's patch it", it's a case of "oh f*** the landing gear is staying up because numb nuts didn't design the correct clearances when they sent the parts to be made"

      Until software is treated as a proper branch of engineering, you will always get the bugs; but the correct mindset in the first place will say "we will get bugs.. but we will test them out of existence before the product has a chance of losing all your data / flying the aircraft into the ground"

    4. rcxb Silver badge

      Re: "We need correct and comprehensive patches for all vulnerabilities from our vendors," she said.

      release a product riddled with flaws and rely on point fixes after the fact to eliminate them. In no other branch of engineering would this have passed muster

      In the physical world, there are a number of engineered structures that require almost constant maintenance. Everything from non-stop re-painting of large bridges, to road maintenance, consumable items with expiration dates, and more.

      If you treat software as a single deliverable that must work perfectly when delivered (like an air conditioner, bicycle, etc), then you rather have to forego any expectations of software updates (whether for security or new features) as well. After all, you signed-off on the perfection of that deliverable unit, and in analogy, the manufacturers of those equivalent items aren't going to come out and improve units they've previously sold you.

      1. Lorribot

        Re: "We need correct and comprehensive patches for all vulnerabilities from our vendors," she said.

        I believe a certain Mr Musk does do engineering and rocket science this way. The amount of testing done is in proportion to the amount required by the overseeing body, so with NASA he has to do a lot, but with cars less so.

    5. Charlie Clark Silver badge

      Re: "We need correct and comprehensive patches for all vulnerabilities from our vendors," she said.

      I've never been able to quite get my head round the assumption that it's perfectly OK to release a product riddled with flaws and rely on point fixes after the fact to eliminate them.

      In the US product liability explicitly allows this. OTOH, you must also accept that many exploits are based on code that was perfectly correct at the time of its development but may now be being used in a different environment, so some kind of leeway is required. And you have to think about handling the huge volume of open source software out there. That said, encouraging a sense of liability can only be welcomed.

  2. PassiveSmoking

    Fast Vs Right

    I think the fundamental problem here is that if a zero-day is discovered, there's an urgency to get a fix out as soon as humanly possible and stop it being exploited. This is entirely understandable, but as illustrated in the article, it can and often does lead to incomplete fixes and ultimately a dev team playing whack-a-mole for a while as new exploits emerge that work around the partial fix.

    Of course the time should be taken to do things right: properly understand the root cause of the problem and comprehensively patch it. But that takes time, and in the meantime the bug is being exploited. An incomplete patch is still better than no patch at all.

    And then we have the problem of management not understanding that a quick patch isn't guaranteed to be a comprehensive fix, considering the matter solved as soon as there's a patch out, and therefore being unprepared to allocate further resources to a problem that they think is already solved.

    Dealing with zero-days really ought to be a 2-step process:

    * Get a patch for the issue out as fast as possible

    * Use the time bought by the patch to do a more thorough code analysis and get a more comprehensive fix out before anybody can find workarounds for the patch

    We've got step 1 down, but step 2 doesn't happen nearly as often as it should.

  3. adam 40 Silver badge
    FAIL

    Don't upgrade

    Simple(s) as that.

    Use older software that no-one cares about, that doesn't have the other elephant in the room:

    Zero-day exploits from new functionality (that you probably don't even need, but was dragged in along with all the other crazy updates on the never ending treadmill.)

    Who bothers attacking old versions of redhat or ubuntu these days?

    1. Anonymous Coward
      Anonymous Coward

      Re: Who bothers attacking old versions of redhat or ubuntu these days?

      Well, presumably you might just have a program persistently scanning any connected systems, attempting a system identification, and then running through a list of relevant exploits. So why make a special effort to *not* attack old versions?

  4. iron Silver badge

    How much is this Google's fault?

    To limit the impact of zero-day vulnerabilities, perhaps Google could stop publishing them before people have a patch ready? Perhaps those patches would be better quality if the devs were given a few extra days to work on them rather than being blackmailed by Google into releasing early? You can't have it both ways, Google.

    1. Charlie Clark Silver badge

      Re: How much is this Google's fault?

      This is naive. First of all, if Google has discovered a bug, assume someone else has as well. Google is only doing research; spy agencies and criminal organisations have even greater incentives to discover and exploit flaws. Secondly, while there have been a few notable exceptions, I think the trend is clear that Google's approach has helped improve practices across the industry: remember, they give companies 90 days before going public, and in general 90 days should be more than sufficient to analyse, evaluate and fix a bug. What can't always be done in the 90 days is perform sufficient variant analysis to see if similar bugs exist. Fuzzing tools do exist, but even they are limited in scope.
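      For anyone unfamiliar with what fuzzing actually does, here is a deliberately tiny, illustrative sketch in Python (a toy parser, not any real fuzzing tool): hammer the code with random inputs and check that nothing but the documented error ever escapes.

```python
# Minimal fuzzing sketch (illustrative only): feed random byte strings to a
# parser and verify it never fails with anything other than its documented
# ValueError. Any other exception escaping the loop would flag a bug.
import random

def fragile_parse(data: bytes) -> int:
    """Toy parser: 1-byte length prefix followed by that many payload bytes."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    if len(data) - 1 < length:
        raise ValueError("truncated payload")
    return length

random.seed(0)  # reproducible fuzzing run
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(8)))
    try:
        fragile_parse(blob)
    except ValueError:
        pass  # documented failure mode -- fine
```

      Real fuzzers (coverage-guided ones especially) are far smarter about generating inputs, but the limitation Charlie mentions stands: they find the bugs they happen to reach, not every variant.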

  5. vtcodger Silver badge

    24 known zero days.

    There were 24 of them in 2020,

    That's 24 that we know of. Anyone want to bet there are more out there that no one has noticed yet? After all, it seems to have taken about 15 months to realise that SolarWinds had been compromised. Is there a reason to believe that SolarWinds was an outlier?

    BTW -- I was doing software test and configuration control in the 1960s before most folks around here were born. It wasn't at all unusual back then for us to be testing patches to patches to patches -- sometimes because the developer didn't fully understand the user's use case and fixed the wrong thing, but more often because there were related issues that no one thought of. It comes as no surprise to me that the situation doesn't seem to have changed much. Something to ponder -- can this cloud thing work if, in the long run, we can't secure the internet?

    1. EnviableOne

      Re: 24 known zero days.

      There are thousands out there, but only 24 were discovered being exploited in the wild.

      TBF, for some of the 24, while fixing the existing one they found a couple of related ones and patched them too.

  6. DS999 Silver badge
    FAIL

    Easy to say

    Sure, "stop providing patches that don't completely fix the problem" is a great goal. So is "write secure software in the first place".

    How about telling us something actionable, like "make sure to check your source code for any re-use of the same code sequence that caused this vulnerability". Telling people "do a better job" is pretty useless without telling them how.

    1. EnviableOne

      Re: Easy to say

      Try checking for the OWASP top 10 exploits before releasing software.

      The top 10 have been effectively the same since the list came out.
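      Case in point, injection, which has been on the list from the start. A minimal sketch in Python using the stdlib sqlite3 module (toy table and input, purely illustrative) of the difference between concatenating user input into SQL and using a parameterised query:

```python
# Sketch of the classic OWASP injection flaw: string-built SQL vs a
# parameterised query, demonstrated against an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")
conn.execute("INSERT INTO users VALUES ('bob', 1)")

user_input = "alice' OR '1'='1"

# Vulnerable: the user input becomes part of the SQL text itself.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
assert len(rows) == 2   # the injected OR '1'='1' clause matched every row

# Safe: the driver passes the value out-of-band; it cannot alter the query.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
assert rows == []       # no user is literally named "alice' OR '1'='1"
```

      The fix has been known for decades, which is rather EnviableOne's point.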

  7. ecofeco Silver badge

    Anyone surprised?

    When you run off the good programmers, institute stupid crap like Agile, fork every language for the LOLZ, pay crap wages, outsource to the least experienced groups and generally screw over anyone who advocates for quality, things tend to end up like this.

    But WTF do we plebes without billionaire connections, know?

  8. Lorribot

    One of the problems faced by maintainers of large codebases such as Microsoft's is where the codebase has a huge amount of legacy 8-, 16- and 32-bit code compared to, say, something like Chrome or Firefox. Maintaining and re-writing stuff so it doesn't break anything else is probably a mammoth testing task. Chrome itself will also have a lot of legacy code from its early open source days.

    Security requirements and awareness have moved massively even in the last 5 years, it is unlikely anyone building an OS from scratch now would have a security model or codebase that was anywhere as leaky or creaky as the most common ones around now.

    Backwards compatibility is not as beneficial as secure by default, it can also be achieved by other methods as Apple has shown.

    It would be interesting to see what Windows would be like without any of that early code in there, as a true 64-bit OS on full proper 64-bit hardware rather than the hobbled, partly-64-bit thing it is on x64 hardware. Might actually be a bit more secure.

    Note: IE will be around and patched for another 10 years at least, as it is still used as a rendering engine by the OS for a lot of stuff.

    1. EnviableOne

      TBF, Microsoft like saying they rebuilt things from the ground up, when actually what they did was take the top layer off the old one, add some more shiny bugs and a tweaked GUI on top.

      Apparently the original Edge was new from the ground up, but it's amazing how many CVEs got patched in both it and IE.
