Linus Torvalds ponders limits of automation as kernel release delayed

Linux kernel development boss Linus Torvalds's prediction that Linux 5.17 would be released this week "unless something surprising comes up" has come to pass. Not in the good way. One surprise was CVE-2021-26401: AMD's Spectre v2 mitigation in the kernel was found to be potentially inadequate on certain systems – it was …

  1. bazza Silver badge

    Grsecurity

    It'll be interesting to see what thanks Grsecurity gets from the rest of the community. There are those out there who have previously displayed signs of wanting to see Grsecurity perish.

  2. Anonymous Coward
    Anonymous Coward

    Automated Testing

    I'm with Linus - there's no substitute for real testing, and the more the merrier. The problem with automated testing is that it tests only those scenarios the test author dreamt up. And, unless they have a particularly fevered level of genius, they can't hope to cover off absolutely everything. That's where letting it loose on the real world, at least a little bit, works, because there you'll find a whole new level of idiot (e.g. someone like myself...)

    (anon, obvs).

    Perhaps less so for kernel testing (I've not done any kernel debugging, so I don't know), but for heavily async systems there's really no substitute for real world testing. You can end up with systems that are perfectly correct, but never quite execute in the same order repeatedly. This can make test vectors and expected results tricky to compose...
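
    To make that concrete, here's a toy example - purely my own sketch, nothing to do with the kernel's actual test rigs. Both threads are perfectly correct, yet the output order depends on the scheduler, so there's no single expected result for an automated test to compare against:

    #include <pthread.h>
    #include <stdio.h>

    static int log_buf[2];
    static int log_len;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each worker appends its id to a shared log; both orders are valid. */
    static void *worker(void *arg)
    {
        pthread_mutex_lock(&lock);
        log_buf[log_len++] = *(int *)arg;
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        int ida = 1, idb = 2;

        pthread_create(&a, NULL, worker, &ida);
        pthread_create(&b, NULL, worker, &idb);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        /* Prints "1 2" or "2 1" depending on scheduling - a fixed
           expected-output vector cannot capture both correct runs. */
        printf("%d %d\n", log_buf[0], log_buf[1]);
        return 0;
    }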

    1. Skiron
      Holmes

      Re: Automated Testing

      Automated testing is a human thing. I expect the kernel devs are among the best in this area, but after a while it becomes "normal" and gets relied on without a thought.

      1. bombastic bob Silver badge
        Devil

        Re: Automated Testing

        ack on the "it becomes normal". Though it's great to have an automatic test suite ready to spot regressions and obvious flaws, how practical would it be to add tests for Spectre-like vulnerabilities?

        I'm also a bit curious as to how such things COULD be exploited, so long as you're not running client-side scripting or other code (NOT hosted on your own system) that might leverage it...

      2. spireite Silver badge
        Coat

        Re: Automated Testing

        Problem is, the vast majority of testing is done by Privates, not Colonels... so they are even more likely to miss the "off the beaten track" tests.

    2. ColinPa

      Re: Automated Testing

      Having 100% of the automated tests pass just means that known problems have been fixed.

      We upgraded our machines to have more and faster engines, and the tests found more problems, such as timing windows.

      If you do not build in randomness, the outcome is predictable. If you introduce randomness, such as time between events, number of tasks running, size of the peaks and troughs, etc., each run becomes unique. This makes it a devil to rerun and check any fixes.
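
      (For what it's worth, the usual escape from the rerun problem is to drive all the randomness from one seed and log it, so a failing run can be replayed exactly. A rough sketch - illustrative only, not our actual harness:)

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      int main(int argc, char **argv)
      {
          /* Reuse the seed printed by a failed run, or pick a fresh one. */
          unsigned int seed = (argc > 1) ? (unsigned int)strtoul(argv[1], NULL, 10)
                                         : (unsigned int)time(NULL);
          srand(seed);
          fprintf(stderr, "seed=%u (pass it back in to replay this run)\n", seed);

          int ntasks = 1 + rand() % 16;   /* randomised load shape...     */
          int gap_ms = rand() % 100;      /* ...and timing between events */
          printf("driving %d tasks with %d ms between events\n", ntasks, gap_ms);
          /* ...actual randomised test driving would go here... */
          return 0;
      }

      Run it once, grab the seed it printed, and pass that seed back as the first argument to get the identical sequence of "random" choices.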

      Running the "golden path" (where all tests work) doesn't prove much. You need to go "off piste". You do not test drive a car just by driving it up an empty motorway. You go down side roads, up farm tracks (at speed), etc.

      As a product manager said to me "We are not interested in hearing that all tests have run successfully. We want to hear where it broke, why it broke, and how to fix it. If you haven't broken it - you are not pushing it hard enough"

      1. ComputerSays_noAbsolutelyNo Silver badge

        Re: Automated Testing

        I absolutely agree with the bit on the "golden path" and "off piste".

        I think this is a scenario where automation can be utilized perfectly well. Given my background and line of work, I am nothing but an interested layman in this field, but I find the concept of fuzzing quite fascinating.

        While real-life workloads are absolutely necessary for testing, bombarding the code with all sorts of sensical (?) and nonsensical input data in an automated manner adds to the testing toolbox, rather than merely automating a test that would otherwise have been done manually.
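
        Something like this toy byte-flipper is roughly what I picture - purely my own sketch, and parse_record() is a made-up stand-in, not anything from the kernel:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Made-up code under test: must reject garbage without crashing. */
        static int parse_record(const unsigned char *buf, size_t len)
        {
            return (len > 4 && memcmp(buf, "REC:", 4) == 0) ? 0 : -1;
        }

        int main(void)
        {
            const unsigned char valid[] = "REC:42:payload";
            unsigned char mutated[sizeof(valid)];

            srand(12345); /* fixed seed, so any crash can be reproduced */
            for (int i = 0; i < 100000; i++) {
                memcpy(mutated, valid, sizeof(valid));
                /* corrupt one random byte, then see if the parser survives */
                mutated[rand() % sizeof(valid)] = (unsigned char)(rand() % 256);
                parse_record(mutated, sizeof(valid));
            }
            puts("survived 100000 mutated inputs");
            return 0;
        }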

        Or was fuzzing a thing prior to automated testing?

      2. oiseau
        Thumb Up

        Re: Automated Testing

        If you do not build in randomness the outcome is predictable.

        If you haven't broken it - you are not pushing it hard enough

        Indeed ...

        +1 to that.

        O.

        1. Charles 9

          Re: Automated Testing

          Oh? There's such a thing as pushing things too far. Finding out it broke is one thing, but how do you explain to them when you come back with nothing but a few charred bits?

      3. Anonymous Coward
        Anonymous Coward

        Re: Automated Testing

        This echoes a completely separate discussion I had over the weekend about theory and the "scientific method". You never prove a theory - any experiment or test that demonstrates the working of a theory just fails to disprove it. We only truly advance knowledge by showing our previous understanding was wrong; testing with changed parameters may add to the weight of probability that a theory is valid, but it doesn't prove it. Good theories stand up to intense scrutiny and become valuable tools to predict future events, and often get refined, but they can never be proven beyond all doubt.

        I like the approach that testing isn't finished until you've broken it. Of course, take that to the extreme and nothing gets released. By definition we will always be surprised when a black swan appears, but by understanding that everything is a work in progress, we can be better prepared for when things break.

        1. Anonymous Coward
          Anonymous Coward

          Off-topic - Re: Automated Testing

          I hate the term "black swan" when it's used to describe something surprising. Cygnus atratus is native to Australia, and has been introduced as an ornamental bird around the world. What a "black swan" event should mean is "happens all the time in some parts of the world" rather than "a super-rare event we hadn't thought of".

          https://en.wikipedia.org/wiki/Black_swan

          1. Anonymous Coward
            Anonymous Coward

            Re: Off-topic - Automated Testing

            " What a "black swan" event should mean is "happens all the time in some parts of the world" rather than "a super-rare event we hadn't thought of"."

            Yep - here in NZ a (very) rare event is more of a "white swan" event - black swans are a major pest that need machine gunning from helicopters.

      4. Anonymous Coward
        Anonymous Coward

        Re: Automated Testing

        This is spot-on.

        An old, wise test engineer once told me: eventually the product gets good at passing your automated tests. So don't just run the same test suite and call it a day.

        They also said it takes a human to really bollocks things up, but I don't think that was meant only about test engineering ....
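
        (One trick for spotting a suite that's gone stale - my own sketch of the idea, not something that engineer prescribed - is mutation testing: plant a deliberate bug and check that at least one test fails. Compile the toy below with -DMUTANT and a healthy suite must go red:)

        #include <assert.h>
        #include <stdio.h>

        static int clamp(int v, int lo, int hi)
        {
        #ifdef MUTANT
            if (v < lo) return hi;   /* planted bug: wrong bound returned */
        #else
            if (v < lo) return lo;
        #endif
            return (v > hi) ? hi : v;
        }

        int main(void)
        {
            assert(clamp(5, 0, 10) == 5);
            assert(clamp(-3, 0, 10) == 0);   /* this one kills the mutant */
            assert(clamp(99, 0, 10) == 10);
            puts("all tests passed");
            return 0;
        }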

  3. Robert Grant

    > "Anyway, let's not keep the testing _just_ to automation," he suggested in his weekly kernel progress update. "The more the merrier, and real-life loads are always more interesting than what the automation farms do. So please do give this last rc a quick try," he added.

    I don't think this is "pondering the limits of automation". The release was held up a little, giving more time for the automation, and he suggests manual testing on top. Nobody can somehow intellectually risk-assess the massive amounts of C code being merged and whether it will continue to fail to break the billions of devices using that code.

  4. ThatOne Silver badge
    WTF?

    New Releases?

    > pondered sending version 5.17 out the door regardless

    Pray tell, what does trigger kernel releases? Specific calendar dates, or being reasonably certain they are fit for service and free of problems? Sorry, genuine question here, I fail to understand how those people think.

    The way I see it, they add new features/fixes, and then they try to make sure they didn't break anything and that everything indeed works as expected. When and if this is achieved, they release. It's not like they have marketing breathing down their necks.

    Now I'm apparently witnessing a devil-may-care system where you routinely release a new version every period t, no matter whether it has issues (we'll fix them later, eventually), for apparently no special reason (like the aforementioned marketing) except routine.

    I'm not dissing the kernel developers here, I'm just genuinely wondering (and slightly horror-stricken).

    (Go on, let your corporatist feeling run free and downvote. I'll understand.)

    1. Gene Cash Silver badge

      Re: New Releases?

      Actually no. As the article states, Linus thought about releasing "on time" but didn't have a warm fuzzy, so he felt it was better if he didn't, and he went with a bit more testing.

      1. ThatOne Silver badge

        Re: New Releases?

        > Actually no.

        Yes, he did the right thing, but I wonder why or how not doing it could even be envisaged.

    2. Androgynous Cow Herd

      Re: New Releases?

      That's why it's called "Open Sores" software.

    3. Richard 12 Silver badge

      Re: New Releases?

      The longer you wait between releases:

      1) The more users are affected by the things you've fixed that were bad/missing in the previous release.

      2) The more changes are in each release - and thus the greater the chance one of them broke something that hasn't been spotted.

      3) The more difficult it is for users to roll back to the previous release if this one broke something - they'll have to give up more other fixes and features, so the decision is made more difficult.

      4) The more changes get added as "just one more thing...", both from product management/sales wanting their pet feature/fix, and developers thinking "I'm in this bit of code anyway...". The ship date then slips forever and it never releases.

      In general, the best approach is to set a fixed cadence of releases, and then *remove* any fixes or new features that aren't good enough as that date approaches - postponing them to the next scheduled release.

      In some rare cases, the almost-baked new feature/fix is really important. In those situations you delay for that one specific thing - for a predetermined length of time - and nothing else.

      And if it doesn't actually get baked in your over-run, you pull it back out and release without it anyway.

      1. ThatOne Silver badge

        Re: New Releases?

        > The longer you wait between releases

        Thanks, but I really don't think delaying for a week or two would trigger #2 and #3.

        As for #1, it only applies in cases where you have already released a buggy kernel... which, in the decades I've been using Linux, didn't happen all that often; normally you can wait an additional week for those shiny new features the new kernel brings. (Also, releasing a buggy kernel to fix another buggy kernel sounds sub-optimal to me, but then again I'm no developer... :-p)

        #4 is an issue I know very well, but that's more of a project management issue. If you delay a release to fix a bug, you shouldn't (IMHO) let people add new features - sorry, the code is frozen; they should only work on fixing the bug. IMHO once again.

        .

        > In those situations you delay for that one specific thing - for a predetermined length of time - and nothing else.

        Yes, that sounds reasonable. I'm just bothered by the "predetermined time" part. Indeed, if it seems to take forever, this new feature is probably not ready to be released yet, and the best solution would be to can it and let people work on it a little longer while the rest of the kernel progresses normally. But I think some flexibility in time should be possible, given there is no commercial "time to market" diktat over your heads.
