What to do about open source vulnerabilities? Move fast, says Linux Foundation expert

Automated testing and rapid deployment are critical to defending against vulnerabilities in open source software, said David Wheeler, director of Open Source Supply Chain Security at the Linux Foundation. Dr Wheeler, who is the author of multiple books on secure programming and teaches a course on the subject at George Mason …

  1. Mike 137 Silver badge


    'open source is potentially more secure because of the long-standing secure software design principle that "the protection mechanism must not depend on attacker ignorance,"'

    The long-standing secure software design principle that by no means every open source developer adheres to, that is. Nevertheless, the primary security advantage of open source is exactly that the source is open to scrutiny, and there is indeed a valuable, if small, community that scrutinises much of it. However, very few users take advantage of this, as the chief perceived advantage is not that it's secure but that it's free of charge to use.

    But regardless of whether it's open or closed source, the most prevalent source of insecurity is that users don't monitor or patch vulnerabilities assiduously. Given the incidence of issues, that has to be a continuous task, not a periodic burst of activity on a fixed schedule.

    1. Anonymous Coward
      Anonymous Coward

      Re: Reasons

      is that users don't monitor or patch vulnerabilities assiduously.

      I agree, but there are a couple of reasons why, apart from the obvious (lazy, can't be arsed) ones.

      1) It's a PITA. I remember decades ago teaching a class to support engineers for a very popular business OS, and one of the students had previously worked on SW for the switches of cellular phone networks. "What do you mean I can't patch a library in a running application without downtime - we've been doing that sort of thing for years..."

      2) SW houses keep changing shit. You load an update to get a security fix and find that all the features of the app you're using have moved, been changed, removed or otherwise broken. Please keep the two kinds of update apart: if it ain't broken, don't fix it, and people might be more inclined to fix the bits that are broken.

      1. Robert Grant Silver badge

        Re: Reasons

        I think (2) is solved by paying enough for LTS editions to be viable.

  2. Claptrap314 Silver badge

    The problem with testing

    Is that very, very few testers are formally trained mathematicians. My first test of any method is always entitled "happy path". It is followed by a bunch of not-happy paths.

    But learning how to write those tests is NOT easy. I certainly wasn't any good at it before my professors had hammered on me for six years.

    1. Tomato42

      Re: The problem with testing

      Also, it requires a completely different mindset than writing code and the associated unit tests.

      Not only is it hard to switch to the "attacker" mindset when writing tests for security-critical code; in my experience many programmers are incapable of doing it, especially for code they've written.

      1. Kobus Botes

        Re: The problem with testing

        @ Tomato42

        ..."many programmers are incapable of doing that, especially for code they written"...

        That applies to ANYTHING you have written. You do not read WHAT you have written (when checking for errors), but what SHOULD have been written, or what you intended to write, and hence miss the error completely.

        A substantial part of my first job involved creating legal contracts (from templates, to be sure, but we had to modify them), where absolutely everything must be correct to avoid possible problems later on. I was fortunate to have had very good mentors, and it was drilled into me that reading your own writing (no matter how perfect you thought it was) had to involve the following steps:

        Read for spelling mistakes.

        Read for grammar mistakes.

        Read for logical mistakes.

        Read for numbering mistakes (paragraphs/sections).

        Read for cross-referencing mistakes (where you refer to something in the same document, e.g. page number, paragraph number, section number, et cetera. These things regularly change).

        Read for meaning.

        Read for consistency.

        Check that your indentation is consistent and correct.

        Check your apostrophes (dotting the i's and crossing the t's).

        Check your capitalisation.

        Let it lie for a day or two (not always possible) and re-read.

        Once it is perfect and there are absolutely no errors, give it to a colleague or two to check.

        Correct all the errors they found and repeat.

        And then, six months down the line when you scan through it, the unseen and unfound errors leap out at you...

        Not the most exciting task to be sure, especially if you are under pressure and short of time, but it has to be done. (Bizarrely, I used to enjoy the process once started, despite the initial reluctance to get going, since I am a troubleshooter, really. Troubleshooting is what interests me, not necessarily the subject matter (although it does make it easier and more enjoyable if it involves something that I am interested in as well)).

        1. Anonymous Coward
          Anonymous Coward

          Re: The problem with testing

          Let it lie for a day or two (not always possible) and re-read.

          I found that phase particularly powerful.

          Another trick to force you to actually read what you have written (especially in combination with the aforementioned two day "cooling off" period) is to read it aloud. That stops you from flash/speed reading. It's not always possible in office situations, but it really shakes out missing words and broken logic.

          1. Adelio

            Re: The problem with testing

            I found that if I had a problem, walking through the code with someone else generally helped.

            "Cardboard cutout" mode.

      2. Anonymous Coward
        Anonymous Coward

        Re: The problem with testing

        Also, it requires a completely different mindset than writing code and the associated unit tests.

        I agree. I'd go as far as stating that it demands a skillset so different to writing code that the two disciplines are almost opposite.

      3. Claptrap314 Silver badge

        Re: The problem with testing

        It might be different from what most have, but to me a proper mindset to write code implies a proper mindset to write code that actually does what you want and nothing else--which means identifying all the nasty corner cases that testing is supposed to catch.

      4. itzman

        Re: The problem with testing

        The golden rule is always to hand it over to an ArtStudent™ for testing.

        Not only do they not think like you, what passes for thinking is more like random radioisotopic decay. They can break anything without even putting a mind to it.

        Then hand it over to the most evil hacker you can find.

    2. Robert Grant Silver badge

      Re: The problem with testing

      > Is that very, very few testers are formally trained mathematicians.

      We tried a mathematical approach, with formal proofs to verify software.

      Given that didn't work, we now adopt a scientific approach, which is to write tests as a body of evidence that can't ever prove the hypothesis "the software works", but can instantly disprove it.
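      That falsificationist view of testing can be sketched with a toy example (the clamp function and its tests are invented for illustration): each test is an attempted refutation of the hypothesis "the software works".

```python
def clamp(value, low, high):
    """Clamp value into the closed interval [low, high] -- the 'hypothesis' under test."""
    return max(low, min(value, high))

# Each check below is an attempted refutation of "clamp works".
# All of them passing proves nothing about inputs not tried;
# any single failure disproves the hypothesis instantly.
def test_inside():    assert clamp(5, 0, 10) == 5
def test_below():     assert clamp(-3, 0, 10) == 0
def test_above():     assert clamp(99, 0, 10) == 10
def test_collapsed(): assert clamp(5, 7, 7) == 7

for case in (test_inside, test_below, test_above, test_collapsed):
    case()  # raises AssertionError on the first counterexample found
```

      The asymmetry is the whole point: a green suite is only "not yet disproved".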

      1. Claptrap314 Silver badge

        Re: The problem with testing

        I'm curious as to why you hold that "that did not work". If you had said "that was too expensive", I would believe you--formal proofs generally follow the informal proof, which still requires a mathematician to produce in the first place.

        And, as I've stated here before, for even small pieces of code, proving "the software works" is roughly the same as a master's thesis in terms of difficulty.

        But it's the hard corners that you are missing with your approach where the nasty security exploits lie.

        1. Tomato42

          Re: The problem with testing

          You may want to brush up on computer science history: there was this bloke, Turing, who showed rather conclusively that there are algorithms you can't mathematically prove to be correct.

          What we can do is formally prove the correctness of small pieces of code, not whole applications.

          1. Claptrap314 Silver badge

            Re: The problem with testing

            Read my comments? No?

            Unless your program is Turing complete, there is (in theory) no problem with building a checker. In practice, of course, things like state explosion and unstated assumptions make it impractical.

            Yes, I know that Turing completeness is deceptively easy to achieve. But you will never get there with your average app. Your average app isn't the C preprocessor or m4.

            1. Tomato42

              Re: The problem with testing

              Shame you don't know why Turing introduced his machine. Here, learn something:

            2. Logiker72


              For pointing out that a large subclass of programs can indeed be proven correct. The "Turing" argument usually comes from people who want to absolve themselves from implementing any additional measures to improve security. Like C developers who are unhappy about learning Rust.

            3. Logiker72

              "Turing Complete"

              Even if you (say) create a Turing-complete scripting engine, you can still prove mathematically that ANY script will

              A) consume only as many instructions as it is allotted

              B) consume only as much memory as it is allotted

              C) access only a small subset of files

              Even if that script comes from a fully hostile adversary.

              1. Claptrap314 Silver badge

                Re: "Turing Complete"

                Sure--by limiting its access to memory & time from the kernel, and jailing its file system.

                If you can prove the OS and file systems are secure.

                If a scripting engine is Turing complete (and most are), then the halting problem applies. So you cannot prove, in general, that a script only consumes so many instructions.

                And it's pretty easy to write a Turing machine that takes a step to the right after n! steps have executed.

                Or accesses a directory "../" + whatever it did before.

                1. Logiker72


                  The interpreter can simply count the instructions executed and the octets allocated. File-access primitives can limit the scope of files to be accessed. It will absolutely HALT after the maximum number of instructions has been executed.

                  No need for OS-level sandboxing. It would not hurt as an additional measure, though.
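                  A minimal sketch of that instruction-counting idea (the toy op set and all names here are invented; real engines meter similarly): the little language below can loop via jnz, and could grow towards Turing completeness, yet the step budget guarantees it halts regardless of what the script does.

```python
class BudgetExceeded(Exception):
    """Raised when a script exhausts its instruction or memory budget."""

def run(program, max_steps=10_000, max_cells=64):
    """Interpret ('set', key, val), ('add', key, val), ('jnz', key, target) ops.
    'jnz' lets scripts loop, but the step counter enforces termination."""
    memory, pc, steps = {}, 0, 0
    while pc < len(program):
        steps += 1
        if steps > max_steps:
            raise BudgetExceeded("instruction budget exhausted")
        op, a, b = program[pc]
        if op == "set":
            memory[a] = b
        elif op == "add":
            memory[a] = memory.get(a, 0) + b
        elif op == "jnz" and memory.get(a, 0) != 0:
            pc = b          # taken jump: continue from the target
            continue
        if len(memory) > max_cells:
            raise BudgetExceeded("memory budget exhausted")
        pc += 1
    return memory

# A deliberately infinite loop is cut off deterministically:
looping = [("set", "x", 1), ("jnz", "x", 0)]
try:
    run(looping, max_steps=100)
except BudgetExceeded as e:
    print(e)
```

                  File access would be handled the same way: route every open through a primitive that checks the path against a whitelist before touching the OS.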

          2. SCP

            Re: The problem with testing

            Tomato42 wrote:

            "What we can do, is formally prove correctness of small pieces of code, not whole applications."

            It depends a lot on what aspect of correctness you want to prove, the nature of your application, and your level of interest in establishing the correctness. Some aspects of correctness are considerably easier than others (e.g. proof of absence of runtime exceptions vs correctness against requirements); some types of application are easier than others (IME state machines seem more amenable to formal analysis); and the analysis tends to be more successful if you design with a view to proving the correctness - rather than trying to use formal analysis on whatever rat's-nest code falls into your lap.

            Formal analysis does tend to have practical limits - it is not a silver bullet for all software ills, but it is a jolly handy thing to have in your armoury; and the situation is changing all the time as tools and techniques continue to develop.

            To my mind one of the greater challenges is in the use of automated proving tools [e.g. Coq]. These are increasingly powerful - but they tend to give a true/false/undecided result with little insight (for humans) into how that result was reached: in other words, they cannot explain themselves well. If you implicitly trust the tool then you might not worry about this (they can be very good) - but if you do not fully trust them you are left with an uncomfortable opaqueness. It will be interesting to see how the research and development progresses on this.
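            Why state machines are amenable to this can be shown by brute force (the turnstile machine below is a standard textbook example, not from the thread): when the state and event spaces are small, a safety property can simply be checked over every reachable pair, which is a hand-rolled miniature of what model checkers do.

```python
from itertools import product

STATES = {"locked", "unlocked"}
EVENTS = {"coin", "push"}

def step(state, event):
    """Classic turnstile: a coin unlocks it, a push re-locks it."""
    if state == "locked" and event == "coin":
        return "unlocked"
    if state == "unlocked" and event == "push":
        return "locked"
    return state  # all other (state, event) pairs are no-ops

# Safety property, checked exhaustively: every transition from a
# known state on a known event lands in a known state.
for state, event in product(STATES, EVENTS):
    assert step(state, event) in STATES
```

            The approach obviously stops scaling as the state space grows, which is where the real tools (and their opaqueness) come in.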

            1. Logiker72


              If you can decompose the system at hand into small parts, you might be able to prove the correctness of the small parts and then prove the correctness of the system as a whole.

              Or the other way around: Excessive complexity bears insecurity.

              1. Claptrap314 Silver badge

                Re: K.I.S.S.

                As I've mentioned before, the "small parts" idea breaks down WAY faster than you expect. My favorite example comes from Newtonian physics. Start with two bodies with certain masses and initial locations & vectors. We can solve this with the calculus. Add a third body. Nope.

                What we HAVE proven is that there exist systems of five bodies such that the entire system escapes to infinity in finite time.

                State machines don't fall off the cliff that fast--oh. Wait. What is BB(5) again?

    3. amanfromMars 1 Silver badge

      Re: an Initial Solution for Flash Crash Testing ..... Greater IntelAIgent Gaming

      Secret messages can be compromised only if a matching set of table, key, and message falls into enemy hands in a relevant time frame. Kerckhoffs viewed tactical messages as only having a few hours of relevance. Systems are not necessarily compromised, because their components (i.e. alphanumeric character tables and keys) can be easily changed. .....

      Relevance is therefore Relative and wholly dependent upon Progress and Success in a Great Future Mood ..... of Almighty Immaculate Being and Virtual Rendering in Physical Phorms ...... Alter Egos of Id

      Such are the Realms Beta Testing Presently ...... honouring the "happy path" method which is very good at excluding all other paths that may compete and seek to propose and impose unhappy future routes. Although why ever anything would travel that direction is something which might even bamboozle a Freudian or Jungian to conclude madness with lashings of badness to exhaust in insane notions and motions the right current diagnosis and persistent accurate enough prognosis for now. In any other time can that both change and be changed to something/anything else.

      Thanks for the chance to practise what is shared and spoken of there, Claptrap314. Every time in Pursuit of Heavenly Excellence, a Win Win with Never Lose Fail Safe Guides.

    4. chuBb.

      Re: The problem with testing

      Couldn't agree more

      Jnr devs learn to hate me when I send their "pass suite" back to them saying 3/4 of the tests are missing.

      Classic missing tests include:

      - verifying that correct exceptions are thrown

      - testing with malformed data

      - testing the negative path, i.e. the test fails predictably and its failure can be considered a pass

      - integration tests masquerading as unit tests (that's a whole different kettle of fish)

      While code coverage can give you a false sense of thoroughness, all that's telling you is how many lines of code are not covered by the tests, NOT how many paths through the code have been tested; and often I find that a lot of tests are testing language features, not logic, just to get coverage higher.

      Usually this leads to a lot of grumbling, until a few months later on the next project, when a maintenance task gets assigned and the value of the missing tests gets demonstrated...
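      The missing categories above can be sketched in a few lines (the parse_port function is a made-up example): the malformed-data and negative-path tests pass precisely when the code fails predictably with the correct exception.

```python
def parse_port(text):
    """Parse a TCP port number, rejecting anything outside 1-65535."""
    value = int(text)              # raises ValueError on malformed input
    if not 1 <= value <= 65535:
        raise ValueError(f"port out of range: {value}")
    return value

# Happy path -- often the only test a junior suite contains.
assert parse_port("8080") == 8080

# Malformed data and the negative path: the test PASSES when the
# code fails predictably by raising the correct exception type.
for bad in ("", "http", "-1", "0", "70000"):
    try:
        parse_port(bad)
    except ValueError:
        pass                       # expected failure == test passed
    else:
        raise AssertionError(f"{bad!r} should have been rejected")
```

      Note that line coverage of parse_port is already 100% after the happy-path test plus one rejection; the remaining bad inputs only add path coverage, which is exactly the distinction made above.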

  3. amanfromMars 1 Silver badge

    A Child's Curse and your Government's Choice ..... Them to Pay for the Sins of Fathers and Mothers

    Whenever you can realise and subject Parliamentary democracy [a massive open source software program] to crash test dummy/0day exploitable vulnerability testing, only a monumental fool and corrupted tool would accept it as being fit for any great and good purpose, trying to hide as it does, its catalogues of failures and points of easy immediate malicious access behind a pay wall of mis-spoken truths and tasks further kicked down the road to be done by others at a much latter date far into the future.

    To not consider that worth catastrophically smashing and fundamentally reforming, because of what it does to the health and wealth and well being of your offspring and future generations of human beings, would be something too shameful to admit to and be proud of ...... although it is much lauded, that cowardly conspiratorial inaction, and completely surrounds and mistreats you as a magnificent ignorant useful fools' tool.

    Which brings one back to a number of Albert Einstein's thoughts, with one/some one all too obviously true and others definitely not so widely well known, but becoming every day considerably more evident and inconvenient to deny and do vain battle against. ....... "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former." and "Imagination is everything. It is the preview of life's coming attractions and is more important than knowledge"

    I wonder what wonders today will bring out into the open to play?

  4. Fred Flintstone Gold badge

    "the protection mechanism must not depend on attacker ignorance"

    That strikes me as basically a variant of Kerckhoffs' 19th principle for cryptography.

    1. Anonymous Coward
      Anonymous Coward

      Re: "the protection mechanism must not depend on attacker ignorance"

      I missed a word, sorry, I meant to say "19th century".

      A clear case of PBC (Posting Before Coffee).

    2. amanfromMars 1 Silver badge

      Re: "the protection mechanism must not depend on attacker ignorance"

      Fred, is there any possible defence against live applications of remote virtual reprogramming with roots and sources au fait in practice with deployment/employment/enjoyment of results entertaining Shannon's maxim. ‽ .

      The posit here being .... No. None.

      And possibly something entertained by El Regers contributing thoughts to this thread, too.

      That would be an AWEsome Ministry with AIMODified Code Altering Behaviour Resulting.

      EMPowered to Proceed and Process Progress in Exploration and Exploitation of Interesting Autonomous IntelAIgent Chain Reaction Cycles/Churns of Creative IT ACTivity ......... NEUKlearer HyperRadioProACTive IT Energy Live BetaTesting Immaculate Virtual Circuits ....... Connected to Dream Central Intelligence Commands and Controls.

      And that's only to begin with ....... who knows what else there be connected and connecting live through there and those? :-)

      1. itzman

        Re: "the protection mechanism must not depend on attacker ignorance"

        Mornington Crescent?

  5. Will Godfrey Silver badge

    I found a good way to test a program

    Write a school timetabling suite and hand it over to the teachers. Make sure it has an accurate detailed user guide, as well as in-program help.

    They'll still find a way to screw up - the most common one being multiple unique spellings of the same pupil's name!

    1. Claptrap314 Silver badge

      Re: I found a good way to test a program

      Rule #1 of programming: The user is your attacker.

      Those names are entered by the administration, and are non-editable by the teachers.

      Problem #1 solved. Next?

  6. EnviableOne Silver badge

    problem stems from lazy programming

    All these libraries and components - I'm all for not re-inventing the wheel, but using someone else's wheel design doesn't mean you have to bring along the whole cart full of holes with it...

    1. Logiker72

      Not Always

      If an application developer uses pcre to check input for being correct (read: secure), that is proper engineering.

      If the pcre developers use C and have bugs in their code, that says something about the pcre devs, not about the application developer.
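      A sketch of that division of labour, using Python's re module as a stand-in for pcre (the username rule is invented for illustration): the application validates input against an anchored whitelist pattern before the input reaches anything security-sensitive.

```python
import re

# Anchored whitelist: lowercase letter, then 2-31 letters/digits/underscores.
# \A and \Z force the pattern to match the WHOLE string, so nothing
# can be smuggled in before or after the valid-looking part.
USERNAME = re.compile(r"\A[a-z][a-z0-9_]{2,31}\Z")

def is_valid_username(text):
    """Accept only strings that match the whitelist in full."""
    return bool(USERNAME.match(text))

assert is_valid_username("alice_01")
assert not is_valid_username("alice; rm -rf /")   # shell metacharacters
assert not is_valid_username("AL")                # wrong case, too short
```

      The regex engine's own correctness is then the library authors' problem, which is the point being made above.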

  7. Logiker72

    Stop Using C and C++

    70% or more of exploitable CVE bugs are related to the "undefined behaviour" that comes with the C and C++ languages. A simple index error in the kernel will often yield total control to an attacker. We have had the "ping of death" and the gethostbyname() takeover.

    Face it, all human programmers make mistakes, because they are tired, sick with the flu, have had a squabble with the wife, etc. There will always be these 70% of bugs if we continue to use C and C++.

    Mathematical proof is too expensive/unheard of for most application fields, so we can rationally exclude that option.

    Rust, Swift, Java, C#, Vala and some others are the way to go.

    Strong typing at both compile time and runtime, and we can eliminate 70% of the bugs!
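    As an illustration of the difference (using Python as a convenient memory-safe example): the classic C out-of-bounds read becomes a defined, catchable error instead of undefined behaviour.

```python
buffer = [0, 1, 2, 3]

try:
    _ = buffer[9]              # in C this silently reads adjacent memory
except IndexError as e:
    print("caught:", e)        # here it is a clean, deterministic failure
```

    The bug still exists, but it crashes loudly at the fault site instead of handing the attacker a read or write primitive.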

    Here is Tony Hoare saying the same thing:

    1. Logiker72

      Non C OS Kernels

      As the C fans will claim that all OS kernels must use C, here is a small list to prove otherwise:

    2. Claptrap314 Silver badge

      Re: Stop Using C and C++

      You have some sort of study to back that 70% claim? May I see the methodology?

      You cannot fix stupid or lazy, and writing code that actually meets an interesting spec is hard.

      I certainly agree that sloppy programming like while (*a++ = *b++) {} should never have seen the light of day, but the problem is not the language. It was stdlib, which promulgated a dangerous data type onto an unsuspecting world. Culture is everything, and claiming "the world would be so much better if we just changed tools" is the province of daydreamers & dictators.

      1. Logiker72

        Re: Stop Using C and C++

        Several people/orgs (including Microsoft) have done statistical analyses of the CVE database and found that about 70% of exploitable bugs would not happen in a memory-safe language. Mozilla invented Rust for the same reason.

        You are free to do the same.

        Finally, your attempt at godwinning the discussion is not ingenious.


  8. Logiker72

    More Good Habits

    + formally defined data formats (e.g. EBNF, regexes)

    + strict scanners+parsers instead of "error tolerance"

    + integer types which generate exceptions on under- and overflows

    + sandboxing apps. Why does Word (and the Word virus you just contracted) have access to your engineering files? AppArmor, Sandboxie, Apple's sandbox, Linux security modules, ...

    + the K.I.S.S. principle. The fewer features your SW has, the fewer bugs it will probably have

    + formally verified, minimalist OS kernels such as seL4. A bug in the TCP stack then corrupts only the TCP subsystem and does not result in a takeover.
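    The overflow-trapping habit can be sketched as follows (Python's integers never overflow, so a checked 32-bit signed add is emulated here purely for illustration): the operation raises instead of silently wrapping.

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def checked_add_i32(a, b):
    """Add two int32 values, raising OverflowError rather than wrapping."""
    result = a + b
    if not INT32_MIN <= result <= INT32_MAX:
        raise OverflowError(f"{a} + {b} overflows int32")
    return result

assert checked_add_i32(2_000_000_000, 100_000_000) == 2_100_000_000

try:
    checked_add_i32(INT32_MAX, 1)   # raises OverflowError
except OverflowError as exc:
    print(exc)
```

    Languages like Swift and Rust (in debug builds) trap like this by default; in C the same addition is undefined behaviour.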

    1. Claptrap314 Silver badge

      Re: More Good Habits

      I agree that the whole "be liberal in what you accept and conservative in what you send" was a dangerous rubric from the get-go, and certainly indefensible by the time that m$ was rampaging across the industry.

      I have often stated that the lack of trivial overflow detection is a major wart in K&R. Not irreparable, however.

  9. midgepad Bronze badge

    Why is Fortran so very fast?

    (I've used it, punching cards by hand, but not recently)
