Linux royalty backs adoption of Rust for kernel code, says its rise is inevitable

Some Linux kernel maintainers remain unconvinced that adding Rust code to the open source project is a good idea, but its VIPs are coming out in support of the language's integration. In an ongoing thread on the Linux kernel mailing list, Greg Kroah-Hartman, a senior project developer, for one, urged fellow contributors to …

  1. beast666 Silver badge

The enshittification of Linux commences.

    1. Mostly Irrelevant

I predict you've never written a line of Rust and are likely not even a programmer. Rust is a well-designed, modern language, and the memory safety is a great feature.

      1. Anonymous Coward

        > Rust is well designed, modern language

It has yet to gain a complete formal definition.

That the Rust zealots decided it was okay to ignore the need for a language specification, and are only now embarking on the creation of a formal language definition, gives the lie to this statement. As does the zealots' stock response that your Rust code is valid if it compiles with their (singular) compiler.

        Rust may be modern, but so is Windows 11…

        1. Anonymous Coward

          re: Rust may be modern, but so is Windows 11…

          The phrase you were responding to was "modern, well designed". I notice that you don't take issue with "well designed".

          I think I'll give a little more respect to Linus' opinion on this one ta.

          1. Anonymous Coward

            Re: re: Rust may be modern, but so is Windows 11…

            The formal language definition is a key design deliverable for any “well designed” programming language.

Your response makes it clear you have zero understanding of the programming language design process.

The wry laugh is going to be when the team currently writing the formal specification fork and produce a formal specification that hasn't had to be compromised to satisfy the vagaries and exceptions of the current Rust language and compiler. I suspect that will be a good, well designed and modern language…

            1. Anonymous Coward

              Re: re: Rust may be modern, but so is Windows 11…

              I pointed out you didn't mention "well designed" so thanks for coming back and actually expressing an opinion on that.

              No need to be a dick about it though.

              1. Anonymous Coward

                Re: re: Rust may be modern, but so is Windows 11…

                Sorry, it's just been a shitty morning and I'm lashing out because of that. It's not you, it's me.

                I'll try and post more constructively in the future, thanks for pointing out my dickishness!

            2. containerizer

              Re: re: Rust may be modern, but so is Windows 11…

              > The formal language definition is a key design deliverable for any “well designed” programming language.

No true Scotsman...

            3. Phil Lord

              Re: re: Rust may be modern, but so is Windows 11…

              The big advantage of a formal specification is that it allows you to build multiple competing implementations of the language.

That was absolutely critical at one point, because you wanted to avoid vendor lock-in. But if you have a freely available implementation targeting a wide range of platforms, the big advantage is less clear to me.

              I am sure having another pair of eyes go over the current implementation to write down a formal specification will improve things. But, then, most formal specifications are also improved substantially by having one or more implementations.

              There are now quite a few languages in very wide use that have no formal specification or have developed one well after having an implementation. I think that the "programming language process" is less doctrinaire in practice than you think.

              1. Roland6 Silver badge

                Re: re: Rust may be modern, but so is Windows 11…

Whilst I would agree you do not need to go full BNF before starting to write anything, a formal definition/specification is really good for communication and testing.

Whilst the K&R white book does contain irregularities and undefined behaviours in the libraries, it could be, and was, used as a specification; many did so to write compilers and other tools such as LivingC. These exposed the differing interpretations and encouraged clarification (eg. move the file pointer beyond the end of a file: what should the result be? I assume in Rust the programmer would need to be explicit as to their intent, as depending on circumstance this could be either valid or invalid.)

Thus we are in agreement that “formal specifications are also improved substantially by having one or more implementations”, which in turn should result in improvements to the compiler and thus to code written in that language.

> “I think that the "programming language process" is less doctrinaire in practice than you think.”

I get your point, but I would suggest the lack of formal definitions/specifications prior to coding suggests a lack of thought, and a degree of laziness in not updating such definitions as a result of coding. Given Rust is trying to capture good practice from older languages, I would expect to see more focus on design, to enable proper testing that good practice really has been captured and correctly embedded.

                1. Phil Lord

                  Re: re: Rust may be modern, but so is Windows 11…

                  What you are describing is exactly the process that Rust does use for all changes and has used for about the last decade or so. The change is discussed in an RFC with a specification of behaviour. Over time an implementation happens, the Rust documentation is updated, it's moved to nightly, or is made available behind a "feature gate" (that is you have to turn it on by hand if you want it). So testing happens, often internally inside the rust implementation.

And eventually it gets merged to stable and turned on for all usage, waiting for an edition change if it is not backward compatible. So, I think all that is happening.

They are building a full formal specification not to change this process, which already works, but for those areas, such as safety-critical systems, where it is explicitly needed. The formal specification will lag behind, which is fine, because in the areas where certification happens it will mean the implementation is well tested before adoption.

                  It is not that much different from C, if I understand the C process properly. There is a standard C with a specification. But, in practice, the compilers implement new features first and people start using them well before the release of an updated specification. Again, if I understand correctly, projects like Linux cannot be compiled with a fully standardized C; just like Rust for Linux, actually, which is also using unstable language features.

                  1. Roland6 Silver badge

                    Re: re: Rust may be modern, but so is Windows 11…

>They are building a full formal specification not to change this process, which already works, but for those areas, such as safety-critical systems, where it is explicitly needed.

If the Rust Foundation are serious about prime time usage of Rust, the "full formal specification and language reference" is needed regardless of application area. I use quotes because I suggest that, as with C, a full informal specification, in the form of the K&R white book (1978), is a good starting point, as we can expect it to take some years before a formal standard can be released - C89 being the first full ANSI C release, though K&R had served C well in those years.

Additionally, the current Rust RFC revision process will need to be formalised, especially with respect to the version of Rust being used for prime time developments. Prior to Trump, I would have suggested dropping Rust on ANSI, but the medium to long-term capabilities of US government financed agencies can no longer be assumed. But we do seem to be better at running "public good" Foundations and Alliances now that we have several decades of experience, compared to the 1980s, of working together on open source and standardised market offerings.

                    >Again, if I understand correctly, projects like Linux cannot be compiled with a fully standardized C

My work in the 1980s on x86 indicated that most compilers were targeted at producing application code, and thus omitted the extra bits needed for an OS to utilise the "system" instructions of the 286/386 etc. So I would assume much the same is the case today.

                2. Blazde Silver badge

                  Re: re: Rust may be modern, but so is Windows 11…

I think the cultural focus on correctness and safety overrides some of your concern. The RFC process is not all that formal, but it is very practical, it takes advantage of the large Rust community to test ideas, and there is a powerful, often frustrating, reluctance to implement things in ways that might later be regretted (partly because of mistakes made in C++ despite the tortuous committee process).

                  Rust also benefited from an approximately 10 year 'incubation' period where major features were torn up and overhauled routinely. It's great that it's more formal now but I think these organic origins (which C also had, incidentally) are more important in a language's formative years.

The other huge advantage Rust has in formality is simply that it builds on lessons C & C++ have already learnt. "Move the file pointer beyond the end of a file, what should the result be?" - exactly what happens in C. Technically that's not guaranteed, but I don't see it changing. How do Rust atomics work? Exactly like C++ atomics, despite that model's flaws, because there isn't enough confidence in being able to do better. It stands on the shoulders of giants, so isn't forced to deal with trivial library issues that have typically been nailed down for decades already.
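As a rough illustration of how directly Rust's atomics mirror C++'s (same operations, same ordering names; plain std-library code, nothing kernel-specific):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // Shared counter; Relaxed suffices because we only need the count,
    // not any ordering of surrounding memory (same reasoning as with
    // C++ std::memory_order_relaxed).
    let counter = Arc::new(AtomicUsize::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    c.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // No locks, no data race: every increment is accounted for.
    println!("{}", counter.load(Ordering::Relaxed)); // 4000
}
```

The `Ordering` variants (`Relaxed`, `Acquire`, `Release`, `AcqRel`, `SeqCst`) are the C++11 memory orderings minus `memory_order_consume`.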

In my experience, when it does get it wrong, it's always obvious what should have been correct; there have just been limitations in propagating checks & bounds in highly contrived and complex scenarios that are unique to Rust. It's learning its new lessons in its own ways, not repeating past mistakes.

                  1. Roland6 Silver badge

                    Re: re: Rust may be modern, but so is Windows 11…

                    >Rust also benefited from an approximately 10 year 'incubation' period where major features were torn up and overhauled routinely. It's great that it's more formal now but I think these organic origins (which C also had, incidentally) are more important in a language's formative years.

Agreed; as said previously, we are now at the stage where Rust is probably sufficiently mature and stable to be released for prime time, thus it really needs the supporting collateral and processes to enable that to happen. ie. where is my Rust "White Book"?

        2. Dostoevsky Bronze badge

          ...formal spec...

          C's formal spec is "everything is undefined behavior." Those who live in glass houses shouldn't cast stones.

        3. Groo The Wanderer - A Canuck

Formal specifications are highly overrated during the initial development of a new language, and Rust is barely out of infancy at this point. While it has been around for a few years, it is very much a "junior" language compared to C, C++, COBOL, et al. But it is a better planned and more deliberately designed language, for all the gaps and holes there may be in the current implementations. It is only now that things have been stable enough to even begin a formalization process.

          Remember: Waterfall/spec-first development hasn't been an "in" thing for 30+ years...

          1. Roland6 Silver badge

            > Rust is barely out of infancy at this point.

            So not ready for prime time and thus the concerns of the Linux developers who are working on something that is used for prime time are valid.

            1. Phil Lord

              I don't think many Linux developers have suggested that, though. The main complaint is whether a second language, any second language, is good at all.

        4. FIA Silver badge

> It has yet to gain a complete formal definition.

I'm probably wrong here, but it seems like C started in around 1974, and got specified in ISO 9899 around 1990. (And if my memory serves, compliance to that spec was patchy well into the 2000s.)

> That the Rust zealots decided it was okay to ignore the need for a language specification, and are only now embarking on the creation of a formal language definition, gives the lie to this statement. As does the zealots' stock response that your Rust code is valid if it compiles with their (singular) compiler.

As opposed to the bold C pioneers who diligently wrote a spec first and then stuck to it? I assume my memories of differences between a variety of C compilers around the turn of the century are just wrong then? (Microsoft's compiler was a particularly good example at the time, though much improved now.)

A thing can be 'well designed' without being specified formally first, just as the opposite is true; there are many examples of this. You're criticising the process, which, unless you're going to step up and do it differently, is kind of pointless. If you want to criticise the outcome (by pointing to areas of the design you consider poor) that would probably be more helpful.

Also, consider it in the context of the thing you're defending. People aren't trying to replace C/C++ for no good reason. C and C++ (and many other languages) have their own set of issues; that's just life. Nothing is perfect, but demanding a perfect replacement or none at all is counterproductive.

          Or... maybe... just go and learn it.. you might find it's not that bad after all?

          I started as a C programmer, and Rust is one of the hardest languages I've tried (and am still very much trying) to learn.

This isn't anything to do with the language syntax, as many languages have 'quirks' (I have to go read up on Perl every time I try to use it..); it's more to do with the language forcing me to think in an ever so slightly different way than I've had to in the past.

          The lifetime/borrowing stuff (as an older C/Java/C# coder) is new and difficult; but I think it's also making me a better programmer, as I've had to subtly change the way I approach certain problems.
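For anyone who hasn't met the borrow checker, a toy sketch of the rules being described (illustrative only, not from the thread):

```rust
fn main() {
    let mut names = vec![String::from("ada")];

    {
        // Any number of shared (immutable) borrows may coexist...
        let first = &names[0];
        let again = &names[0];
        println!("{first} {again}");
    } // ...but they must end before the vector can be mutated.

    // Mutation requires an exclusive borrow: holding `first` across this
    // call would be a compile-time error, not a runtime crash. That
    // exclusivity is what rules out iterator invalidation and data races.
    names.push(String::from("grace"));
    assert_eq!(names.len(), 2);
}
```

Coming from C/Java/C#, the adjustment is mostly about structuring code so that shared and exclusive access never overlap, which the compiler then verifies for you.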

          It's also by far the most fun I've had learning a new language in many years.

FWIW, on the point of worrying that my existing skills will become less useful: that's what I'm intending to retire on. There's a lot of codebases written in old technologies that are still happily powering the world; as the skills in those decline, the cost of someone who has them rises.

          1. Roland6 Silver badge

            >I assume my memories of differences between a variety of C compilers around the turn of the century is just wrong then?

There were plenty of differences between compilers in the mid 1980s, although the differences between differing flavours of Unix, given they were porting the same code, were less marked than the differences between the various PC/MSDOS ones and their libraries. The differences were sufficient to require development projects to select one compiler and adopt it across the board, as mixing code from, say, Aztec C with Microsoft C was not going to end well.

            >consider it in the context of the thing you're defending. People aren't trying to replace C/C++ for no good reason.

I'm not defending C as a wonderful, perfect language, just defending it with respect to the form it was in when it hit prime time in the 1980s, and comparing and contrasting that with Rust, likewise other widely used languages: Ada, COBOL, Fortran, Pascal, Algol-60, Algol-68 etc.; hence I find Rust to not really be ready for widescale adoption and prime time usage.

          2. Roland6 Silver badge
            Pint

            cont.

            >I started as a C programmer, and Rust is one of the hardest languages I've tried (and am still very much trying) to learn.

            ...it's more to do with the language forcing me to think in an ever so slightly different way than I've had to do in the past.

One of my university lecturers regarded the teaching of procedural languages as first (programming) languages as a form of wilful brain damage, since it made learning functional programming much more difficult: you had to unlearn old habits and learn new paradigms. So the beer is to help refresh you on your learning journey.

        5. ryokeken

christian language doesn't belong in the code conversation

      2. karlkarl Silver badge

        > Rust is well designed, modern language, and the memory safety is a great feature.

And if the Linux kernel were 100% Rust then you could benefit from these great features.

But unfortunately the fact that it will no longer be a homogeneous codebase undermines all of the benefits you stated.

... But this whole Rust argument has been done to death. Let's just wait and see. If this experiment works, it works; if it doesn't, it doesn't. Opinions are split almost 50:50. Either way, the BSD community will be very welcoming to skilled developers who dislike Rust and are looking elsewhere.

      3. Anonymous Coward

        > I predict you've never written a line of Rust and are likely not even a programmer.

        Quite probably not. Beast666 is an account pretty much dedicated to obvious, low-rent trolling and I doubt that they even care about the issue itself here.

        I suspect that much of the satisfaction and/or usefulness of a troll like that *is* that they're so obvious and low-rent, yet people can't keep themselves from feeling obliged to reply seriously to even the most obvious attempt to push their buttons, dignifying it in the process and legitimising its use as an excuse to push the discussion in that direction.

    2. ryokeken

      ok boomer

      good bye boomer

  2. klh

    I wonder if COBOL programmers behaved like C ones are today.

    1. Anonymous Coward

      Definitely.

    2. An_Old_Dog Silver badge
      Joke

      Something to Ponder

      An operating system written in COBOL.

      Is COBOL memory-safe?

      1. Peter Gathercole Silver badge

        Re: Something to Ponder

I would say that COBOL, at least as I knew it years ago, without pointer types or casting, and with strict typing and array boundary checking, was a lot more memory-safe than C.

        Whether you could or would want to write an OS in it is another question entirely.

        1. An_Old_Dog Silver badge

          Re: Something to Ponder

          In my uni days, I wrote a short demo program in FORTRAN IV which changed the value of a constant, such that 2+3 had the value of 4. I hadn't thought to try it with COBOL.

          1. Anonymous Coward

            Re: Something to Ponder

Declaring a constant with the same name as a literal, with a value different from its literal value, is probably possible in all languages. That does not make it a memory-unsafe operation.

    3. GNU Enjoyer
      Trollface

      Maybe I should learn to write in GNU Cobol

      To add to the GNU C.

  3. Anonymous Coward

    Veteran C and C++ programmers, however, are understandably worried...

    "Veteran C and C++ programmers, however, are understandably worried their skills could become less relevant."

I've been writing code for over 53 years. I demand that no one learn anything new, and that we go back to abacuses because, damn it, they are the oldest computing devices. And unless the rods break, they are memory-safe! Old school should be the only school. If they were good enough for the Sumerians in ~2500BC, they are good enough for you today!

    1. jake Silver badge

      Re: Veteran C and C++ programmers, however, are understandably worried...

I use an abacus near daily. It's the only calculator I've found that will last in the feed barn. Slide rules gum up something fierce with all the various plant sugars involved ... but the ol' abacus is self-cleaning.

As a side note, I am not even remotely worried about my skills becoming less relevant. The concept is a joke as far as I'm concerned. Good coders can find work in any language, and they know it.

      1. A.P. Veening Silver badge

        Re: Veteran C and C++ programmers, however, are understandably worried...

        Good coders can find work in any language

        Two minor issues with that:

        Most coders are only mediocre at best.

        Most coders know only one language.

        I'll happily mix languages within one project if/when that gives the best results.

        1. jake Silver badge

          Re: Veteran C and C++ programmers, however, are understandably worried...

          But are you a good coder, or a "most" coder?

          1. A.P. Veening Silver badge

            Re: Veteran C and C++ programmers, however, are understandably worried...

            As I've worked with multiple languages and usually get an excellent or better on my reviews, I think I am pretty good.

    2. bombastic bob Silver badge
      Facepalm

      Re: Veteran C and C++ programmers, however, are understandably worried...

      Veteran C and C++ programmers, however, are understandably worried their skills could become less relevant.

      I get the snark, but SERIOUSLY, who actually THINKS like this...?

      MY main concern is "too many compatibility layers" and the impact on **PERFORMANCE**

      AND: the NUMBER ONE bad thing: that instead of shoehorning Rust to fit a C language Linux, the C CODE GETS SHOEHORNED TO FIT **RUST** and its GARBAGE COLLECTED BOUNDS CHECKING ANTI-PERFORMANCE BLOAT!!!

      If you want blistering speed, you do NOT re-check bounds as you call more functions. The functions MUST assume that ALL parameters are SANE!!! And COMPROMISING EFFICIENCY inside of a kernel is "The Micros~1 way" and HAS been since Windows 3.1 with "Oh let's just assume CPU's will get faster and encourage UPGRADING to fix OUR code issues..."

      1. Adair Silver badge

        Re: Veteran C and C++ programmers, however, are understandably worried...

        A living organism that fails to adapt and evolve is generally a dead organism.

        OS code is not that different.

The only real question is: is the adaptation/evolution advantageous, or (as in a mutation) likely not?

        1. Anonymous Coward

          Re: Veteran C and C++ programmers, however, are understandably worried...

All adaptations are mutations that succeed thanks to the process known as natural selection.

          1. Anonymous Coward

            Re: Veteran C and C++ programmers, however, are understandably worried...

            That is not true in all cases. In the case of humans, natural selection is largely absent in the equation, due to the safety barriers of various sorts humans/human society has erected.

            As a friend of mine said, "Stupid kills ... but not near enough."

          2. Adair Silver badge

            Re: Veteran C and C++ programmers, however, are understandably worried...

            That is technically incorrect: a 'mutation' is a genetic change caused by direct 'damage' to DNA, e.g. cosmic ray, toxic chemical, etc. Almost all 'mutations' have a negative impact, or are neutral at best, very exceptionally a mutation has an advantageous effect.

            Whereas 'evolutionary adaptation' is the result of external stimuli that causes no direct 'damage' to DNA, e.g. environmental change that provokes genetic change to at least compensate for, or, more productively, take advantage of the change.

            1. that one in the corner Silver badge

              Re: Veteran C and C++ programmers, however, are understandably worried...

              > environmental change that provokes genetic change to at least compensate for, or, more productively, take advantage of the change.

              Bullshit.

Genetic change comes from mutation - which includes copying errors, not just "direct damage" from Cosmic Rays, non-Cosmic rays, interloper chemicals and so forth. True, the number of mutations that give advantageous results is low - BUT there are LOTS of mutations occurring in individuals and (in a population that is able to evolve) there are a LOT of individuals doing a LOT of reproducing. Of the rest, the majority are not harmful to the organism - not enough to be of any concern w.r.t. just getting on with the job of living - and reproducing. These LOTS and LOTS and LOTS all multiply together to give - a very slow process of change!

              Oh, and add in changes due to sexual reproduction (i.e. a mix from both gives a new, unique, combination). Which speeds the whole process up and was a very advantageous change for the lucky blob whose forebears accumulated the mutations to allow it to happen.

To reach the final individual, add in changes in expression due to epigenetic factors (including, but not limited to, chemicals from the environment - but not all chemicals are toxic!). But those are not in any way "provoking genetic change": the 'ability' to react to that feature of the environment already existed, it just happens to be being expressed today. That expression may even be in terms of which genes get passed on to the offspring - but I do not have any examples of that, so, wild surmise, if David Attenborough hasn't told us of it, it isn't occurring enough to be one of your cornerstones of evolution.

        2. Anonymous Coward

          Re: Veteran C and C++ programmers, however, are understandably worried...

          Organisms don't evolve. Only populations can evolve. The fate for individual organisms is death.

          Mutations are advantageous or not. The ones that are may lead to a better fit to the environment the population is in at the moment. Those changes become adaptations to that environment. Adaptations are always advantageous. Not all changes are adaptations, they may just be not too disadvantageous at the moment but can still be selected out.

          To be equivalent, OS code would have to have many different branches (organisms in the population) actively developing and being used simultaneously.

OS code does not evolve. It is the exemplar of Paley's Watch: i.e. intelligently designed. We hope.

          1. MonkeyJuice Bronze badge
            Stop

            Re: Veteran C and C++ programmers, however, are understandably worried...

            Organisms don't evolve. Only populations can evolve. The fate for individual organisms is death.

This is patently false, and has been known to be false for the last 30 years.

            This isn't a new discovery, so there really isn't any excuse for blathering this tripe on here.

Please don't make blanket statements on the internet when you don't know the subject; it just lowers the internet's IQ even further than it already is.

            1. Peter Gathercole Silver badge

              Re: Veteran C and C++ programmers, however, are understandably worried...

              I think that you have to be a bit clearer about when Horizontal Gene Transfer can occur.

It mostly happens in single-celled organisms like bacteria, and the reason for this is that their reproduction is by mitosis (splitting of the cell with the duplication of the genes), so a change to the DNA through HGT will be passed on to any offspring. Even the article you point to makes this clear.

For more complex organisms, while HGT can occur, and does very frequently when you're thinking about how viruses corrupt the function of a cell to produce more instances of the virus, the eventual fate of the corrupted cell is probably cell death.

              There are cases where HGT may be a cause of cancerous cells, whereby a cell's DNA is corrupted and then reproduced as a cell divides, but this is nearly always limited to the organism infected, i.e. it will not be passed down to offspring.

              With more complex organisms that reproduce sexually, the only way to pass a genetic change down to offspring is to change the DNA makeup of the oocyte or spermatocyte, such that the DNA passed on to the offspring includes the changes.

              Again, I'm sure that there are provable cases where this has happened, but most changes will be caused by faulty gene replication during the production of the gametes, whether by radiation or chemical alteration of the process. It may also be the case that external influences while a zygote is still in the low cell count stage could alter the DNA of one or more cells that get passed into the resultant gene set of the offspring. But I think that this being caused by HGT is very, very unlikely. Most changes like this actually don't end up with viable offspring, and those that do do not always provide a positive outcome for the offspring, reducing the chance of the change persisting into further generations.

              I am, however, intrigued by the Wikipedia article saying that HGT happens in Tobacco plants. I must look that up.

              One earlier comment brought up the point that the evolutionary sieve has been somewhat eliminated in the human population. I would agree with this, as many conditions that would adversely affect people with negative changes are treated by medicine, or supported by social care systems, allowing people affected by negative changes/mutations to procreate, and pass their genetic state down to their offspring, something that probably wouldn't happen without our society. I'm not going to make any comment about whether this is a bad thing, but I'm sure it is happening.

      2. Anonymous Coward

        Please stop shouting

        Seriously, have your caps lock key checked out, it seems to engage far too often.

      3. WhoKnowsWhoCares

        Re: Veteran C and C++ programmers, however, are understandably worried...

        >garbage collected

        lol, lmao even. Rust haters continue to amaze with their puddle deep knowledge of the thing they bash.

        1. Blazde Silver badge

          Re: Veteran C and C++ programmers, however, are understandably worried...

          He's been corrected a bunch of times before, so he knows. It's intentional misinformation. A sign of the strength of feeling on this issue.

      4. An_Old_Dog Silver badge

        Re: Veteran C and C++ programmers, however, are understandably worried...

        Perhaps array bounds-checking is something which ought to be implemented in hardware.

        1. Ken Hagan Gold badge

          Re: Veteran C and C++ programmers, however, are understandably worried...

          It's been tried, in several different ways. No-one has found a way to do it that isn't really slow.

          Besides which, it is fairly easy to do it in software (C is perhaps the only language in common use that doesn't, on account of its age) and in many cases it can be done at compile-time.

          1. An_Old_Dog Silver badge

            Re: Veteran C and C++ programmers, however, are understandably worried...

            No-one has found a way to do it that isn't really slow.

            Or prohibitively-expensive in hardware resources.

            ... As of yet.

          2. trindflo Silver badge

            Re: Veteran C and C++ programmers, however, are understandably worried...

            C (as opposed to C++) is arguably closer to a macro assembler than a high level language. I think the lack of boundary checking has more to do with what C is best at than its age.

            1. captain veg Silver badge

              Re: Veteran C and C++ programmers, however, are understandably worried...

              > I think the lack of boundary checking has more to do with what C is best at than its age.

              May be this is stating the same thing, but for me it was a design decision to make arrays and pointers syntactically equivalent. Which can be handy, but leads to the counter-intuitive result that a[i] is precisely the same thing as i[a] since both are equivalent to *(a + i), which is the same as *(i + a).

              -A.

              1. Roland6 Silver badge

                Re: Veteran C and C++ programmers, however, are understandably worried...

                Yes there are many things you can do with C, which are perhaps best avoided.

                An exemplary collection can be found at https://www.ioccc.org/

                I did have a collection of books such as "The C Puzzle Book" and others from the mid-1980s. We used these as tests for LivingC to confirm we had correctly implemented the language and that our animator correctly stepped through the obfuscated code.

                I hope we will see similar books for Rust... ( iorcc.org is available...)

        2. grumpy-old-person

          Re: Veteran C and C++ programmers, however, are understandably worried...

          If you can find a copy, spend a few interesting hours with the book "Advances in Computer Architecture" by Glenford J Myers, published in 1982 (ISBN 0-471-07878-6), to get an idea of how much research went into implementing safety/security (and other things) in hardware.

          IBM's SWARD is particularly interesting.

          The state of technology at the time was probably the cause for abandoning those efforts, but in these times perhaps some of the computing power gobbled up by bloated, pretty software should be redirected to executing safety/security features in hardware.

          Also, why not use array-bounds checking in software? I once read that turning the feature off enabled one to get wrong answers as fast as possible!

      5. trindflo Silver badge

        Re: Veteran C and C++ programmers, however, are understandably worried...

        Thanks for getting me to look into the reality of Rust. I tend to tune out when things degrade into snark, and the comment you pointed out certainly qualifies.

        From what I read, Rust is a far cry from a Java-like garbage collector (which essentially allows memory to leak until you run out of memory then makes everything wait while it tidies up). A lot of what Rust does is enforce better practices through the compiler.

        On the other hand, Rust fans seem to sweep bounds checking (that is also an aspect of Rust) under the rug or insist that everyone else does it and you should just get with the program. Bounds checking does have an effect on performance as you say, and this would be compounded if the checking is happening in multiple layers.

        I will say that multiple layers shouldn't exist in a driver that is written for performance. Calling a subroutine imposes a huge performance penalty.

        And now, having looked into Rust, I'm of the opinion that it is a tool best used where appropriate. If blistering speed isn't required (the DMA routines do seem like a good example), the tradeoffs might well be worth it. There is also a hype aspect that seems to imply you can let an AI or an intern write your device drivers and the magic tool will make it all work out, which isn't a beneficial ethos. It is a tool to use and not a god to worship.

        1. containerizer

          Re: Veteran C and C++ programmers, however, are understandably worried...

          > From what I read, Rust is a far cry from a Java-like garbage collector

          Rust is a programming language, not a memory allocation technique.

          Nothing stops someone from implementing a garbage collector in Rust, or for that matter in C or C++. Horses for courses.

          > which essentially allows memory to leak until you run out of memory then makes everything wait while it tidies up

          That's not fair. Modern GC implementations can be used which do a lot of clever stuff in the background. They can't reduce the delay to zero, but you can tune high-volume applications so that the delays are measured within single-digit milliseconds.

          It's a skillset in itself to understand GC and learn how to measure and tune its behaviour, so I won't downplay the fact that it's overhead. But there's a different skillset needed for managing memory in C. Assuming that you don't have bugs that leak memory, you still need to consider things like memory fragmentation. The memory allocation routines are not instantaneous - they do have to spend CPU cycles looking for blocks of memory in the heap that match the size you requested. Linux/etc allow you to avoid this by allowing essentially infinitely large heaps and falling back on the virtual memory subsystem to deal with it, which works well but is wasteful.

          > On the other hand, Rust fans seem to sweep bounds checking (that is also an aspect of Rust) under the rug or insist that everyone else does it and you should just get with the program.

          I've no idea what this means, but Rust enforces bounds checking at compile time, so I suspect your research is deficient.

          > I will say that multiple layers shouldn't exist in a driver that is written for performance. Calling a subroutine imposes a huge performance penalty.

          Rust doesn't "call a subroutine" (since when was calling subroutines a problem ?). The memory allocation stuff is done at compile time.

          1. trindflo Silver badge

            Re: Veteran C and C++ programmers, however, are understandably worried...

            >> From what I read, Rust is a far cry from a Java-like garbage collector

            >Rust is a programming language, not a memory allocation technique.

            In the post I was responding to there was a comment about garbage collectors. I was saying Rust isn't that and doesn't do that. The Register makes it easy to find the post being replied to with the curly arrow to the upper left of a post; I use that quite often to get a better idea of what is being replied to. My comment would look strange out of context.

            >> which essentially allows memory to leak until you run out of memory then makes everything wait while it tidies up

            >That's not fair. Modern GC implementations can be used which do a lot of clever stuff in the background. They can't reduce the delay to zero, but you can tune

            That's true, and what you're saying is the overhead is less noticeable because it is artfully spread out over time. Depending on what is being done, it's also reasonable to just hope the program is done quickly enough that no GC is needed at all and it just gets handled in process run-down. As a gross description of what a GC does, I think my description explains it. Because of that I don't think a GC belongs in a driver or anywhere that timing is important. And again, Rust isn't that.

            > I've no idea what this means, but Rust enforces bounds checking at compile time

            Agreed. Enforcement at compile time is all good. Rust also labels code as unsafe that does not perform bound checking at run time as Phil Lord points out. Naturally someone who doesn't understand all the implications will declare there will be no unsafe code in a given project and it will become enforced. This is a hidden performance cost. I've been wondering how we are somehow getting something for nothing in terms of memory safety, and now I know the answer. As expected, there is no free lunch.

            > Rust doesn't "call a subroutine" (since when was calling subroutines a problem ?). The memory allocation stuff is done at compile time.

            Taken a little out of context. Calling a function at an assembly language level is calling a subroutine. Boundary checking, if performed at several layers of nesting, would compound the costs of boundary checking. As for since when is calling subroutines a problem, there is quite a bit of overhead involved in setting up stack frames. I've measured it and it is much worse than is apparent.

        2. Phil Lord

          Re: Veteran C and C++ programmers, however, are understandably worried...

          Rust does not forcibly bounds check. It allows you to choose which you want. The simpler syntax (`i[n]`) is bounds checked. The non-bounds-checked version uses a method call and is considered `unsafe`.

          Most of the time, however, you just use an iterator. So you just say `for n in i {}`. This does not bounds check as it is provably safe without doing so.

          So, Rust is quite explicit about bounds checking, and does not do it where it is not needed. There is very little or no performance compromise here.
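          As a quick sketch (the variable names here are invented for illustration), the three access styles look like this:

          ```rust
          fn main() {
              let v = vec![10, 20, 30];

              // Indexing syntax is bounds checked: v[9] would panic at runtime.
              let first = v[0];

              // The checked method call returns an Option instead of panicking.
              assert_eq!(v.get(9), None);

              // The unchecked variant skips the check, so it must be marked unsafe.
              let last = unsafe { *v.get_unchecked(2) };

              // Iteration needs no per-element check: it is provably in bounds.
              let sum: i32 = v.iter().sum();

              println!("{} {} {}", first, last, sum); // 10 30 60
          }
          ```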

        3. Avalanche

          Re: Veteran C and C++ programmers, however, are understandably worried...

          Spoken like someone with no clue how modern Java garbage collectors actually work...

        4. Blazde Silver badge

          Re: Veteran C and C++ programmers, however, are understandably worried...

          On the other hand, Rust fans seem to sweep bounds checking (that is also an aspect of Rust) under the rug or insist that everyone else does it and you should just get with the program. Bounds checking does have an effect on performance as you say, and this would be compounded if the checking is happening in multiple layers

          Bounds-checking put me off Rust for a short while. It's the only area where performance is routinely sacrificed for safety so for us performance-freaks it feels like a tough compromise to make.

          I encourage you to study the performance impact in practice because that's what brought me around. It's immaterial to non-existent almost all the time for a range of reasons:

          1) LLVM combined with the Rust compiler is just so good at reasoning that it will elide most bounds checks, especially those multiple-layer ones you're concerned about. The same bound checked twice results in unreachable blocks, and those get culled.

          2) Most accesses look like iteration to the compiler, and even when they don't, if you the programmer know you're in bounds it's usually because you've implicitly checked bounds somehow already (and would do in any language). If that implicit check is in any way local, the compiler will recognise it. If it isn't local and you are going to repeatedly index a container, you can perform the most-likely-to-fail bounds check yourself in an outer block so the compiler culls any inner ones (e.g. coming into a function with a Vec that you know for obscure reasons is always >= 1024 in size, assert that immediately. It's good practice anyway).

          3) Things that have length are generally stored as fat pointers (length+pointer), which means your bounds check often happens for free on the same cache line as your pointer while the pointed-to memory is being fetched.

          In rare circumstances where you see a bounds check happening in a super-hot loop, you have the option to use unsafe access. I'm yet to need this, but it's nice knowing it's an option.

          Set against that: the times you're guaranteed to get a full-cost bounds check are when you screw up your implicit bounds check with an off-by-one or similar, or you should have checked but didn't. Once you experience this a couple of times you start to realise it's a smart trade-off, because you spend less time debugging and produce more-correct code, which is undeniably a performance win of its own.

          The final realisation that caused me to stop worrying and love the bounds check is that Rust's memory abstraction allows some important optimisations that aren't possible in C/C++:

          1) Aliasing headaches are gone. Pass a Rust fn two references and if either is mut it knows they aren't the same object, and it can optimise accordingly.

          2) Structs get reordered to save space.

          3) Small functions which compile down to the same code can be merged, because there's no requirement for every function to have a unique address.

          4) Other stuff I'm sure, but you get the idea.

          These are also small effects, but they can easily offset bounds-checking overhead in real code.
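          That "check once in an outer block" pattern can be sketched like this (the function name and sizes are made up; whether the inner checks are actually elided depends on the optimiser):

          ```rust
          fn checksum(data: &[u8]) -> u32 {
              // One up-front assertion; the optimiser can then typically
              // elide the per-access bounds checks in the loop below.
              assert!(data.len() >= 4);

              let mut acc = 0u32;
              for i in 0..4 {
                  acc = acc.wrapping_add(data[i] as u32);
              }
              acc
          }

          fn main() {
              println!("{}", checksum(&[1, 2, 3, 4, 5])); // 10
          }
          ```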

    3. bombastic bob Silver badge
      Devil

      Re: Veteran C and C++ programmers, however, are understandably worried...

      I've been writing code over 53 years.

      48, actually. We "Ents" could teach you whippersnappers a thing or two.

    4. Dan 55 Silver badge

      Re: Veteran C and C++ programmers, however, are understandably worried...

      C may be a Sumerian language, but C++ isn't. (Also, why did the article have "and C++" in there as well?) C++ is not C. C++ can be a memory-safe language if you follow well-known design patterns.

      This quote however shows the dead-end in which they find themselves now:

      "I'd like to understand what the goal of this Rust 'experiment' is: If we want to fix existing issues with memory safety we need to do that for existing code and find ways to retrofit it. A lot of work went into that recently and we need much more"

      Linus said in 2007 that the C++ wasn't welcome in the kernel, hence you find hand-crafted templates and classes in it written in C. No C++. They are just C programmers, at least as far as the kernel goes.

      Now they've built a rod for their own backs. C++ allows memory safety to be retrofitted, no other language does. They'll have to take another more pragmatic look at the "no C++" rule or end up rewriting huge parts of the kernel in Rust.

      gcc successfully moved to C++ over time, it wasn't completely rewritten. I don't see why the same couldn't be done for the kernel, if they could get over this "rewrite all the things in Rust" idea first.

      1. that one in the corner Silver badge

        Re: Veteran C and C++ programmers, however, are understandably worried...

        > They'll have to take another more pragmatic look at the "no C++" rule ...

        Whilst I count C++ as my favourite language (at the moment, it may change - again) I can sympathise with not wanting it in.

        Recent work has provided memory safe ways of working - especially for new code where you aren't stuck with loads of lines using another idiom - and whilst I (hope to) account for every byte used (gotta love microcontrollers), I have seen far too many cases of C++ code that has utterly dreadful performance.

        From not apparently even *knowing* you can pass a const ref[1] to (in the most recent years) writing "good" code that "always did everything with an STL string" and ended up doing megabytes (I kid you not) of memory copies[2] just to spit out a couple of kilobytes of (badly formatted) data. The code "looked OK" but they just had no idea what was going on, or when copies would occur... And that was just strings! What you can do with dictionaries of maps of dynamic vectors...[4]

        It is *so* easy in C++ (and other languages, let's be fair) to not understand what is actually going on[3] and follow "the current style" - that may lead to a usable application, but I can see kernel devs just banging their heads, repeatedly, over so many occurrences of things like that.

        C may (!) be less than perfect, but you find it so much harder to hide your little memcpy fetish.

        You *can* carefully introduce C++ into a project, and get a good result without (too much) friction. But you can also, all too easily, introduce something that sweeps through the codebase in a massive search and replace, like a tub of sugar-free gummy bears, and suddenly everyone is doing C++, ready or not. At least the Rust code is segregated...

        [1] hey, you could speed up MFC code just by going through the sources with a "make parameter const ref" macro on speed dial; CTime, looking at you

        [2] including places where the compiler would happily have concatenated those constants, once, at compile time, if they'd just written them as #def'ed and then (LEADIN "this unique bit" LEADOUT)

        [3] and trying to get anyone to write an instrumented version of their code to just print out (only when the correct #if is triggered, like your debug build - you do have a debug and a release build, don't you) all the sizes of things being instantiated, copied, destroyed...

        [4] for anyone keeping count, from earlier in the week, yes, yes I have just done a rant about "bad code I have seen" in a way that is likely to discourage a newbie from trying to join in an open source project

    5. steelpillow Silver badge
      Facepalm

      Re: Veteran C and C++ programmers, however, are understandably worried...

      > ... their skills could become less relevant.

      Oh, pleeze! Leave the bombast to the zealots and social media.

      Dare I suggest that things like compact, fast and efficient compiled machine code might just lie at the back of their minds?

    6. Paul Herber Silver badge

      Re: Veteran C and C++ programmers, however, are understandably worried...

      "abacuses" ?

      What's wrong with a bunch of sticks, or rocks?

      Actually, the choice between sticks and rocks is just like the choice between C and Rust!

  4. claimed
    Facepalm

    I’ve said it before and I’ll say it again:

    Don’t like Rust? Cool

    Don’t see the value in Rust? Hubris

    “I’ve already got an immune system, antibiotics just mean swallowing giant pills and are not needed if your cells just kill bacteria like they’re supposed to”

    1. jake Silver badge

      I don't think anybody intelligent has said "I see no value in Rust".

      What people are saying is "Rust will bring more problems to the existing kernel than it will solve".

      1. claimed

        You’re probably a little more critical than I am, I think plenty of intelligent people have implied exactly that.

        “Causes more problems than it solves” is also a value statement, so I’m going to lump those comments in there too.

        That’s why I say “hubris” and not “stupidity”.

    2. dmvjjvmd

      Hubris

      I thought you meant https://hubris.oxide.computer/ and was terribly confused.

  5. jake Silver badge

    There is no doubt in my mind ...

    ... that eventually, C will no longer be used as the language for the Linux kernel. It is inevitable. Things change over time.

    However, I have many doubts that Rust is the language that will be the one to take the place of C. If Rust is truly better than C for kernel work, Rust will take over as kernel coders spread the news. That's how it works. The best, most efficient code wins. That's how it has always worked in the 60ish years I've been coding in the free software world (DECUS, early '60s).

    Rust seems to not be taken up by the vast majority of kernel coders, ergo ...

    I rather suspect that it won't be the next language du jour that takes over from C, either. Nor the next.

    What it'll take is a major paradigm shift[0] in programming that all (or at least vast majority of) kernel coders can get behind. My gut feeling is that the seeds of this new way of looking at programming haven't even been planted as yet. And no, before anyone says it, so-called "artificial intelligence" will not be a part of the answer.

    I'll bet that the bulk of the Linux kernel will probably still be coded in C long after I am gone, and probably well past my great-grand kids programming careers (assuming).

    [0] I hate that phrase just as much as you do ... but I think it actually fits there.

    1. GNU SedGawk Bronze badge

      Re: There is no doubt in my mind ...

      I personally think the tooling is getting better slowly and those improvements start to feed back into later C language revisions.

      A lot of the sharp edges about C are C89 or earlier - who says C29 has to maintain a rigid refusal to incorporate quality of life improvements, which today are implicit patterns.

    2. An_Old_Dog Silver badge

      "The best, most efficient code wins."

      True. Unless human politics and/or money and/or sex are involved.

      Welcome to the real world of vested interests and irrational humans.

    3. bombastic bob Silver badge
      Stop

      Re: There is no doubt in my mind ...

      Things change over time.

      I see this happen in my refrigerator. It's called "rotting".

      Change is often WORSE than fixing what you have. See Arthur C. Clarke's "Superiority".

      1. that one in the corner Silver badge

        Re: There is no doubt in my mind ...

        > Things change over time.

        I see this in the younglings. It's called "growing up into wonderful human beings".

      2. jake Silver badge
        Pint

        Re: There is no doubt in my mind ...

        "I see this happen in my refrigerator. It's called "rotting"."

        I see this happening in the plonk and applejack down in the caves ... it's called "aging".

        Change is often a GOOD thing. Change just for the sake of change? Maybe not so much.

        And "We must fix the memory errors! Rust can do that, therefore we must use Rust!" is no answer at all.

    4. sabroni Silver badge

      Re: There is no doubt in my mind ...

      You seem to think that all engineers are purely rational creatures who can spot an improvement and will instinctively go for the logical best choice.

      In my experience that isn't how this works. Ego gets in the way.

      When .Net launched VB was clearly superior to C#, there were loads of things that VB did that were a ball ache in C#. Even the error messaging was worse, one error in vb could be 30 in C#.

      But most people didn't like to be associated with a language called Basic, and they preferred C# because it sounded a lot like C and C++. So most developers chose the language that was missing a load of features because it sounded better. Of course nowadays C# is better than VB ever was, but at the start people made a choice based on the name, not on the capabilities of the language.

      So, the point I'm eventually making, maybe Rust just needs a cooler name?

      1. jake Silver badge

        Re: There is no doubt in my mind ...

        "When .Net launched VB was clearly superior to C#,"

        And the smart coders saw the lot as the clusterfuck that it was, and refused to have anything to do with it.

        Strangely enough, we have managed to remain gainfully employed without all the headaches that .net brought the world. Imagine that.

        1. sabroni Silver badge

          Re: all the headaches that .net brought the world

          What headaches did .Net bring to the world? A slightly better JVM?

    5. phuzz Silver badge

      Re: There is no doubt in my mind ...

      Rust seems to not be taken up by the vast majority of kernel coders, ergo ...

      It's been three years since the first Rust modules were added to the kernel, so I'd say it's still too early to tell. Especially as it's mostly used for new additions to the kernel (as Linus advocates in TFA), rather than wholesale rewrites of existing modules.

      FWIW I agree with the pragmatic approach. Use Rust for new things, but if part of the kernel written in C has been working fine for years, then auditing it for memory-use bugs is likely to be a much more efficient use of people's time than a wholesale rewrite.

      Of course this whole issue is about the parts of the kernel where maintainers writing in C have to interact with maintainers writing in Rust, and honestly I don't have a good solution for that.

      1. A.P. Veening Silver badge

        Re: There is no doubt in my mind ...

        but if part of the kernel written in C has been working fine for years, then auditing it for memory-use bugs is likely to be a much more efficient use of people's time than a wholesale rewrite.

        But fixing those memory-use bugs may break the code. And yes, been there, done that and got the scars.

        1. phuzz Silver badge
          Devil

          Re: There is no doubt in my mind ...

          Ah yes the old "Start with 10 bugs. Fix one. Now you have 12 bugs" :) Yep we've all been there

          1. Anonymous Coward
            Anonymous Coward

            Re: There is no doubt in my mind ...

            Correction:

            "Start with 10 bugs. Fix one. Now you have 12 bugs"

            becomes

            "Start with 10 KNOWN bugs. Fix one. Now you have 12 KNOWN bugs. True total of ALL bugs = UNKNOWN"

            :)

        2. Anonymous Coward
          Anonymous Coward

          Re: There is no doubt in my mind ...

          "But fixing those memory-use bugs may break the code."

          Please help me to understand !!!

          Are you saying that the code should be left alone because it 'works' even though you know there are memory-use bugs ?

          In this case 'works' simply means that the bugs have not shown themselves in a way which is obvious or sufficiently erroneous to break something important !!!

          'It works and has done for years' can suddenly change to 'It's broken' because of a change to the code that may be required in the future, does this not mean that the code is actually 'Broken' NOW !!!

          If the code is 'Broken now' then maybe there IS a VALID reason to fix it NOW !!!

          This is not just playing with semantics, it is highlighting that there are pieces of code that are working now that may fail at a random point in time due to bugs that are as yet unknown and that the trigger to the failure is also unknown.

          Does this raise the urgency or need to fix these bugs and maybe signal that the definition of 'working' needs to be a little more precise !!!

          :)

  6. An_Old_Dog Silver badge

    Things Which Ought be Considered About ANY Add'l Language for Kernel Work

    * Subroutine/function calling convention: does the proposed language play nicely with the other programming language(s) used in the kernel?

    * Does the proposed additional language require an interpreter or LLVM? Those are significantly-large attack surfaces.

    * Is the proposed language free-as-in-freedom, truly-open-source, and free-as-in-beer? I have not forgotten the BitKeeper debacle.

    * Is the proposed language reasonably-easy to learn and understand by people lacking eidetic memories? (APL fails this test.)

    * Is the proposed language and its associated libraries stable over the long term? (C passes this test. Python does not.)

    1. Notas Badoff

      "Adding another language really shouldn't be a problem."

      Please pardon my derailing your thread, but I'm bothered by something not mentioned in any of these "It's war!" articles.

      Um, what other languages are used inside Linux. Aren't there just C and C++ ? And a smattering of assembler? And C++ is still being digested?

      Unless there were some large multiple of two languages _already_ being used in Linux, a statement like the above title just gives me instant heartburn.

      Yeah, right, boss, anything you say! (calcium pills gonna be a rare earth 'round here!)

      1. bazza Silver badge

        Re: "Adding another language really shouldn't be a problem."

        I don't think there is any C++.

        Technically speaking, one now gets eBPF in the kernel... though no one knows what it is or what its purpose is, because the eBPF gets added only at runtime!

    2. swm

      Re: Things Which Ought be Considered About ANY Add'l Language for Kernel Work

      Is there a standard for the RUST language?

      Is there more than one compiler for the language?

      Does the gnu compiler collection support RUST?

      Will future versions of RUST invalidate old code?

      1. GNU Enjoyer
        Angel

        Re: Things Which Ought be Considered About ANY Add'l Language for Kernel Work

        >Is there a standard for the RUST language?

        No.

        There is some language documentation, but there are incompatible changes to the language made every non-minor compiler version.

        >Is there more than one compiler for the language?

        No - there is only the LLVM+rustc compiler for the current language.

        There is a compiler written in C++, that can only compile a very old version of the language - so the only way to bootstrap is to bootstrap gcc and then compile that and spend days and hundreds of gigabytes of disk space building every version of the rust compiler in sequence, hoping that no version fails to compile.

        >Does the gnu compiler collection support RUST?

        No - "gccrs" has been in development for >7 years and it still cannot compile "hello world" (I don't believe it taking that long is due to incompetence on the developers' part - it appears to be a deficiency of the language being too complex to write a compiler for without hundreds of people working on it for years).

        >Will future versions of RUST invalidate old code?

        As mentioned, every new version of rust invalidates something from previous versions of the language.

      2. Phil Lord

        Re: Things Which Ought be Considered About ANY Add'l Language for Kernel Work

        Is there a standard:

        There is a reference implementation with reference documentation, and a well-defined process for updating the reference implementation. Rust lacks a complete formal specification, although one is being written; for the moment it is targeting an older version of the language.

        Is there more than one compiler

        In addition to the main reference compiler (rustc) there are three others. One is special purpose (a minimal compiler designed to allow bootstrapping of the tool chain). The other two...well read next answer.

        Does GCC support Rust

        There are two compilers that use GCC. One is a codegen backend built on libgccjit (for static rather than just-in-time compilation). The other is a front end for GCC, which is aimed at being a complete implementation wrt compilation but not wrt soundness of the type system; the latter will be provided by the same system that rustc uses. I *think* the gccjit codegen is usable, although not complete, while the front end is actively developed but not usable for anything other than experimentation.

        Will future versions of Rust invalidate old code

        Rust provides a backward-compatibility system (called the "edition system") which means that newer compilers can compile old code, but old compilers may not be able to build new code. Editions also interoperate, so libraries can remain on an older edition than their callers (or vice versa). The exception is where old code depends on behaviour that is considered buggy, especially wrt soundness. Rust supports a linter system which is capable of applying fixes and is often able to upgrade code written for older editions to newer ones.

        In short there are limitations with Rust that may or may not be a problem for certain use cases, but it is overall a fairly reasonable story.

    3. LybsterRoy Silver badge

      Re: Things Which Ought be Considered About ANY Add'l Language for Kernel Work

      -- (APL fails this test.) --

      As an old user of APL (the second language I learnt; the first was BASIC) I'm not sure I could understand the modern variants, so I'll fully endorse your comment.

      In APL's defense I will say that watching a user try and make sense of a programmer's keyboard was a delight.

    4. that one in the corner Silver badge

      Re: Things Which Ought be Considered About ANY Add'l Language for Kernel Work

      > Does the proposed additional language require an interpreter or LLVM

      Can you clarify, please?

      By "LLVM" you do mean "low level virtual machine" rather than "anything coming out of the LLVM project"?

      100% agree with "no (more) VMs in the kernel" but, if that isn't your meaning, apart from the fact that the sources are (currently) only fit for consumption by GCC (AFAIK), is there anything intrinsically wrong with the LLVM languages? Which might be taken as a dig at the current Rust compiler!

    5. Phil Lord

      Re: Things Which Ought be Considered About ANY Add'l Language for Kernel Work

      Function calling: Rust can call any C function. It can manipulate any data structure created in C, and can create the same data structures on the Rust side.
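      That C interop can be sketched with standard Rust FFI - here libc's `abs` stands in for any C function a kernel driver might call, and `#[repr(C)]` shows how a data structure gets a C-compatible layout:

```rust
// Minimal FFI sketch: Rust declares the C function's signature and calls it.
// libc's abs() stands in for any C function on the other side of the boundary.
extern "C" {
    fn abs(input: i32) -> i32;
}

// A #[repr(C)] struct lays out its fields exactly as a C compiler would,
// so the same data structure can be shared across the boundary.
#[repr(C)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    // Calling across the FFI boundary is unsafe: the compiler cannot
    // verify what the C side does with memory.
    let magnitude = unsafe { abs(-42) };
    assert_eq!(magnitude, 42);

    let p = Point { x: 3, y: 4 };
    println!("abs(-42) = {}, point = ({}, {})", magnitude, p.x, p.y);
}
```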

      It does not have an interpreter. It does have a core and standard library; I think they are only using the core library in Rust for Linux, so it is quite small. LLVM is being used during compilation; but then there is lots of work on using clang to compile which would be the same attack surface. I am unconvinced that LLVM however brings a large attack surface; just a different one from GCC.

      Yes, the language is free as in freedom and beer.

      The language is not the easiest to learn. It is certainly harder than C because the language is, I think, larger. But then it is more explicit than C with an aim to have fewer corner cases or undefined behaviour. So, arguably, it is easier to use because you have to remember less.

      Rust has a good record for backward compatibility. Newer versions of Rust compile old code well; at a binary level, new code can interface and call code compiled with older compilers. It has a written mechanism ("editions") for introducing breaking changes. The only exception is where old code depends on what is considered to be a bug, esp soundness or security bugs.

      All of this was considered and was discussed before they put it into the kernel.

      1. MonkeyJuice Bronze badge

        Re: Things Which Ought be Considered About ANY Add'l Language for Kernel Work

        I am particularly perplexed about the LLVM attack surface point. Given that Apple exclusively use it, if there were an attack surface to speak of, we would definitely have heard of it by now.

  7. DoctorNine

    Let me get the popcorn

    I feel approximately the same way about squabbles between Linux maintainers as I used to in my youth about Saturday morning TV wrestling. Very entertaining as a spectator sport, but not to be taken too seriously. First, regardless of the skill level of the various fellows wearing tights, the whole thing is refereed by a certain individual who wields a mean rapier, and rather enjoys exhibiting his skill with it. And second, the outcome of the whole mess is going to be reasonably benign, because everyone has to keep working together for the next show. Those who get too emotional and rage-quit need to remind themselves about point one above. In my personal opinion, of course.

  8. Anonymous Coward
    Anonymous Coward

    Pushing, pushing..

    The problem with dual-codebase kernel: it won't be a dual-codebase kernel. At least, not for long. It's *pushing* by people to force Rust into it now. As they make headway, there will be more of what's going on now: *PUSHING* to get it into all the other places. They're saying it loud and clear, right now: Linux needs to adopt memory-safety by default. It's an attempted coup of the Linux kernel with a new language.

    If Linux can be Rust, maybe that's fine. I think there are some drawbacks, but.. maybe that's fine. Let's not pretend that that won't be the logical conclusion, though: if Rust keeps going, the people who are pushing will continue to push, until everything (sudo) is replaced with Rust - as opposed to "unsafe" C code.

    Others can get around it by opting to standardize the in-kernel APIs, which Linus et al. don't want to do now. It would allow both languages to coexist. Without that, you need dual-language maintainers, which is where the problem is currently, you need dual-language bindings, and you will continually -- until it's done -- have the Rust proponents saying, "But this is memory unsafe, so these bindings for C don't matter any more. This component is Rust now." That is their current goal, if you read between the lines: memory-safe Linux. If not, why push Rust into it? If so, what alternative is there but Rust?

    Embrace. Extend. E

    1. bazza Silver badge

      Re: Pushing, pushing..

      Relax. A similar thing happened when assembler programmers were confronted by increasingly competent C compilers. As soon as C compilers became good enough to beat the accomplishments of the best assembler programmers, we stopped using assembler for OSes and many other things. And absolutely no one looks back on that as a bad thing.

      The same thing is happening now with C and Rust. It's just that most of the senior Linux kernel developers aren't old enough to remember the assembler->C transition, and what a relief it was for everybody.

      1. GNU Enjoyer
        Angel

        Re: Pushing, pushing..

        What is Linux if not a kernel?

        >we stopped using assembler for OSes and many other things.

        Assembly (usually inline, but not always) is still heavily used in OSes and many other things where C won't do, and the simplicity of assembly makes it easy to conform to the C ABI.

        People only stopped writing straight machine code.

      2. Roland6 Silver badge

        Re: Pushing, pushing..

        There were ‘C’ level languages aka high-level assembly languages, around before ‘C’ took over. However, ‘C’ was probably the first that was vendor/machine independent and transportable. Plus it was taught at University, so if you wanted to use those graduates without incurring training costs, it was Unix and C for your systems.

        1. F. Frederick Skitty Silver badge

          Re: Pushing, pushing..

          I'm not aware of any pre-existing languages that were on a similar level to C - its direct predecessors B and BCPL certainly weren't, as they only had one data type. C has often been described as a glorified assembly language, and there is definitely a lot of truth to that. At the time there was a large gulf between assembly language and higher level languages like FORTRAN, COBOL, APL, PL/I, etc.

          1. Roland6 Silver badge

            Re: Pushing, pushing..

            The major alternative systems programming language would have been BLISS, from CMU and used by DEC. Others such as PL/M were proprietary and most definitely high-level assembly languages. To me C had sufficient high-level concepts yet, prior to extreme optimisation, was easily mapped into assembler; perhaps more importantly, the machine code could be mapped back to the C source and thus bug-fixed.

            Algol-68, whilst very powerful, left a lot to be desired when it came to debugging. I've not had to use Rust in anger and so don't know how easy it is to bug-fix from machine code traces and dumps.

            Things could have been very different if CMU had written and freely distributed a portable OS written in BLISS…

      3. Doctor Syntax Silver badge

        Re: Pushing, pushing..

        Is it possible to write an OS completely in memory safe mode or must a memory safe language need an unsafe mode to get to the parts memory safe can't reach?

        1. Charlie Clark Silver badge

          Re: Pushing, pushing..

          Sort of a trick question because I think it can be done, but the hardware needs to support it as well. I remember seeing a talk at FOSDEM years ago about this, but I'm hazy on the details.

          1. Doctor Syntax Silver badge

            Re: Pushing, pushing..

            Not really a trick question. To some extent the UCSD P-System did it: it was interpreted by the P-Code interpreter, whatever that might have been written in - probably assembler.

            When it gets down to the level of handing out chunks of memory I guess it's not intrinsically safe, it's just made safe by doing it carefully.

            1. A.P. Veening Silver badge

              Re: Pushing, pushing..

              The original P-Code interpreter was written in assembly, but later versions were written in higher level languages which compiled to P-Code.

  9. GNU Enjoyer
    Angel

    > to the open source project

    Sorry to break it to you, but Linux is only a kernel and despite being the poster child of "open source", the kernel, Linux isn't even completely source-available and contains proprietary software without source code, making it proprietary software.

    "El Reg" shouldn't repeat the error that Linux is "open source", no matter how popular and you shouldn't either.

    If you want a fully source-available version of Linux that is also free software, you'll need GNU Linux-libre; https://gnu.org/software/linux-libre

    If you doubt my word, just ask and I'll start posting links.

    1. diodesign (Written by Reg staff) Silver badge

      Good attempt

      7 out of 10 troll. Did make me check our wording. Linux kernel ... open source project. Yeah, we're good.

      C.

      1. GNU Enjoyer
        Angel

        Re: Good attempt

        I never GNU/Troll.

        Despite how ironic it is, it is a *fact* that Linux is not completely source-available and therefore is proprietary software, as all of it doesn't respect the 4 freedoms; https://www.gnu.org/philosophy/free-sw.html#four-freedoms

        Linux doesn't meet the 10 requirements of the OSD either; https://opensource.org/osd (see requirement 2).

        Please observe the following proprietary software without source code, disguised as arrays of numbers (this is just a few of them);

        https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/powerpc/platforms/8xx/micropatch.c

        https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/media/usb/dvb-usb/af9005-script.h

        https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/media/common/b2c2/flexcop-fe-tuner.c#n227

        https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/media/i2c/vgxy61.c#n115

        https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/iio/proximity/aw96103.c#n122

        If the license of the above is GPLv2-only, I'm sure you could find the source code for everyone (spoiler; it's not available).

        There are also a lot of questionable tables without any comments or documentation that look like they could contain software (even non-creative data that does not qualify for copyright should have its format documented, or such detail should be at least commented);

        https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/realtek/r8169_main.c#n3322

        https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/usb/r8152.c#n7824

        https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/wireless/realtek/rtw88/rtw8821c.c#n111

        https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/wireless/realtek/rtw88/rtw8822b.c#n103

        https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/wireless/realtek/rtw88/rtw8723d.c#n45

        https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/wireless/realtek/rtw89/rtw8852a.c#n68

        https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/phy/mscc/mscc_main.c#n667

        But that really is just the tip of the iceberg - there's a whole lot more proprietary stuff in Linux; https://linux-libre.fsfla.org/pub/linux-libre/releases/6.13.3-gnu/deblob-6.13 (the script functions don't differentiate between array encoded software, questionable data tables and proprietary peripheral software loading machinery, although the only way you can really tell the difference between the former two is reverse engineering anyway).

        All of that is separate to the "linux-firmware" project, which is a massive collection of proprietary software derivative works (with a handful of free peripheral software, distributed under GPLv2-only compatible terms with source code) that some Linux developers maintain, many of which are updated in lockstep with the Linux half (also, one of the peripheral software files apparently contains Linux without source code, but it seems that's alright).

        The correct wording would be; "Some maintainers of the kernel, Linux remain unconvinced that adding Rust code to the publicly developed proprietary software project is a good idea", but you're not going to write that, as that would break it to too many people wouldn't it?

        1. GNU Enjoyer
          Angel

          Re: Good attempt

          Ah yes, how could I forget the most blatant case of GPLv2-infringement, with a header that clearly stated it was proprietary software; https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=00f3696f7555d0890ae07b635e6ccbf39fd2eb3a

          It took >18 years for it to be removed, and it wasn't removed because it was proprietary, it was only removed because it was totally obsolete.

        2. diodesign Silver badge

          You're just wrong

          The files you link to are GPLv2 and/or BSD licensed. They are open source.

          Your complaint really is that hardware has to be specifically programmed to work, whether that's with magic values in IO registers or microcode, but that actually means you're upset that the designs of the electronics, whether it be a network card or a microprocessor, aren't open - fine, take that up with the hardware designers.

          The kernel code that operates the hardware is open source. It's open source. I'm done here.

          C.

          1. GNU Enjoyer
            Angel

            Re: You're just wrong

            I'm so right that you're trying to gaslight me by writing that proprietary software without source code is "open source".

            >The files you link to are GPLv2 and/or BSD licensed.

            If they are, GPLv2-only licensed, where is their source code?

            The GPLv2 very clearly states; "The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable." and clearly an undocumented object form is *not* the source code.

            If someone has slapped a copy of the GPLv2 on some proprietary object code, they are simply lying about what the license is - GPLv2 is not the license - it's some other proprietary one.

            It is trivial to release proprietary software in object form under say the 3-clause BSD and at least permission to reverse-engineer has been granted, but the software is still proprietary.

            >They are open source.

            The "open source" definition disagrees with you;

            `2. Source Code

            The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost, preferably downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed.`

            >Your complaint really is that hardware has to be specifically programmed to work,

            Yes, hardware needs to have specific register values set as part of the init process and that can quite easily be free software - it's merely a matter of a free license and having comments documenting what each command does - although maliciously removing the comments from such a command sequence renders such software nonfree, as then the user is denied freedom 1, and it is also not "open source", as section 2 is not followed.

            Some hardware needs to have peripheral software loaded up for it to work and that can quite easily be free software too - it's merely a matter of releasing the source code under the terms of the GPLv2-only or GPLv2-or-later or a compatible license.

            >take that up with the hardware designers.

            Yes, some hardware manufacturers and some Linux developers should be sued for working together to infringe copyright by infringing the GPLv2 and be forced to comply with the GPLv2 (or have their license permanently terminated and damages for the freedom lost collected), but the copyright infringement club known as the Linux Foundation ensures that will never happen.

            >The kernel code that operates the hardware is open source. It's open source.

            A lot of the kernel code that operates hardware is not source-available and therefore is not "open source" as per the "OSD".

            Repeating something that is false multiple times doesn't make it the truth.

            If you have some other definition for "open source", I'd like to read it.

            1. GNU Enjoyer
              Angel

              Re: You're just wrong

              Ah yes, today happens to be the day the Linux-libre project was announced by Jeff Moe; https://web.archive.org/web/20140203134408/http://lists.autistici.org/message/20080221.002845.467ba592.en.html

              Linux had been nonfree for 12 years already by then.

              Linux-libre is now 17, and Linux is still nonfree.

              Let me guess, freedom haters will downvote this too?

              1. Anonymous Coward
                Anonymous Coward

                Re: You're just wrong

                > Let me guess, freedom haters will downvote this too?

                Dang, those guys juust hate arr freedom!

              2. jake Silver badge

                Re: You're just wrong

                "Let me guess, freedom haters will downvote this too?"

                Hah. Ol' Bill of Ockham suggests that it's just people who think you are being a prat.

            2. that one in the corner Silver badge

              Re: You're just wrong

              Tables of seemingly random numbers that get loaded into hardware registers to initialise something or other. Ok.

              > The source code for a work means the preferred form of the work for making modifications to it.

              Clearly, you have not worked alongside the same hardware devs as I have.

              They just LOVE writing their hardware setup directly into magic numbers! In a mixture of bases, just to keep us on our toes.

              I've spent ages writing macros, with comments, so that I can write out "set (named) sub-register to (meaningful enumerated value) and set (other named sub-register) to (range-checked-at-compile small integer). Only to come back the next week to find it all replaced by a single 32-bit hex constant! Yes, yes, the code is a lot shorter now, thank you for that...
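              The contrast can be sketched in Rust (field names, widths, and values here are invented for illustration):

```rust
// Sketch of readable register setup versus the single magic constant
// that replaces it. The register layout is entirely made up.
const CLK_DIV_SHIFT: u32 = 8; // bits 11..8: clock divider (invented)
const MODE_UART: u32 = 0x1;   // bits 1..0: peripheral mode (invented)

/// Build the register value from named fields instead of a bare constant.
const fn reg_value(clk_div: u32, mode: u32) -> u32 {
    (clk_div << CLK_DIV_SHIFT) | mode
}

fn main() {
    let readable = reg_value(4, MODE_UART);
    // ...and the single hex constant the "shorter" version collapses it to.
    let magic = 0x401;
    assert_eq!(readable, magic);
    println!("{:#06x}", readable);
}
```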

              1. GNU Enjoyer
                Angel

                Re: You're just wrong

                >They just LOVE writing their hardware setup directly into magic numbers!

                Yes, akin to writing straight AMD64 machine code in hex with the AMD64 reference up (or in memory), in that case, those magic numbers alongside the hardware reference is the source code.

                Although such a technique of programming is quite foolish, as you'll come back 6 months later, wonder what idiot wrote such an incomprehensible hex constant, and realize it was you.

                Even then, it is often clear if there were no comments, or the comments were stripped off.

        3. that one in the corner Silver badge

          Re: Good attempt

          You might like to educate yourself over the difference between drivers for hardware and the kernel proper.

          Unless you can demonstrate that EVERY single use of Linux MUST include at least one of those files then your claim is incorrect: it would be (is!) entirely possible to build and run an entirely free & open copy of Linux, even according to your standards.

          Once past that - yes, some hardware does require opaque binary blobs to run. Some hardware requires you to go to the manufacturer's website and download a precompiled x86 executable (tough luck, foolish Arm/RISC-V/etc owner). Some require you to sign NDAs and hand over your firstborn into escrow. Nobody is even attempting to claim those are all open in any sense, let alone open source. But they run within Linux - and you ate totally at liberty to shun them with all the snoot you can muster.

          But the existence of any of those drivers does impinge upon the kernel that they would run atop.

          The onus is on you to back up your claim - get to it: demonstrate that EVERY Linux system MUST have the files you have mentionef

          1. that one in the corner Silver badge

            Re: Good attempt

            > But the existence of any of those drivers does impinge upon the kernel that they would run atop.

            Sigh, prufrede it!

            Does *NOT* impinge upon the kernel...

            But hopefully that missed word was obvious from the tone of the rest of the post.

            (Also, "are" not "ate" and so on)

          2. GNU Enjoyer
            Angel

            Re: Good attempt

            >the difference between drivers for hardware and the kernel proper.

            The "drivers for hardware" component of Linux is part of the "kernel proper" (it's under the drivers/ directory) and is not a separate part.

            Even "linux-firmware" is clearly a part of Linux, as it's inseparable from Linux if you want it to do anything.

            For some hardware there are drivers available that have nothing to do with Linux, but those aren't relevant to the topic.

            >Unless you can demonstrate that EVERY single use of Linux MUST include at least one of those files then your claim is incorrect

            For every single usage of the kernel, Linux, someone MUST first download the sources (which entails downloading proprietary software if downloading a proprietary version of Linux), configure it and then compile it.

            The inclusion of proprietary software in the kernel binary entirely depends on the configuration (cut-down configurations include few modules; generic configurations include nearly all of them), but it is *not* an easy task to determine whether proprietary software was inserted into the binary or not (compiler and configuration file bugs could possibly result in things being inserted despite arrays not being used and/or a <config>=n option).

            The only reliable way to be sure would be to inspect the resulting binary with a decompiler and hex editor, but that would be very time-consuming and carry a high chance of missing something.

            That would be a waste of time anyway, as you can just build from the GNU Linux-libre sources and you can be confident the resulting binary is free (unless there was something missed while cleaning up, but the verification script is now very good and any mistakes are soon rectified).

            >it would be (is!) entirely possible to build and run an entirely free & open copy of Linux

            Assuming you somehow managed to compile a binary which didn't have any proprietary software inserted from the tainted sources, that binary would still not be free, as you could not exercise freedom 2 without the risk of permanently losing your license.

            Distribution of that binary would require including a written offer, or including the source code, but as soon as you distribute the tainted sources, your license is automatically terminated for distributing proprietary derivative works.

            Declining to provide the source code would automatically terminate your license too.

            Technically your license won't get terminated if there is a written offer and nobody ever exercises it, but if something is dependent on nobody ever exercising freedom, that is not freedom.

            >demonstrate that EVERY Linux system MUST have the files you have mentionef

            Every usage of Linux downloaded from kernel.org requires downloading those files I mentioned (I figure the build scripts would likely fail to operate if you were to hack up git to exclude downloading the files containing proprietary software also).

  10. bazza Silver badge

    From the article:

    Kees Cook, a kernel security engineer at Google and a long-time kernel contributor, for instance said: "I don't see any reason to focus on replacing existing code – doing so would actually carry a lot of risk. But writing new stuff in Rust is very effective."

    Cook's point is fair enough, though it's primarily a matter of timing. There's no need to focus on replacing existing code yet. But at some point it's probably got to happen. If Rust does become clearly "routine" within Linux, and the language in which new work is done as a matter of course, there will be pressure to re-do various parts of the Kernel in Rust.

    Fixing RedHat-style Takeovers

    There's another reason to do so. One significant problem in Linux is the combined result of the use of GPL2 and the scattered ownership of copyright. This situation is what has allowed RedHat to do what they have done (refusing to distribute source) with the mainstream kernel project seemingly powerless to respond. A defence would be if there were a single copyright holder for all the Linux source code, who could then in turn decline to give a corporate party such as RedHat access to their source code, cutting their downstream off. That can't happen at the moment because there isn't one single copyright holder for the Linux kernel code. Other projects - I'm looking specifically at GNU Radio - require copyright assignment to the project (well, to the EFF) before accepting code contributions. This does mean that they can say "no, we're not letting you have it" if they need to, or they can modify the license arbitrarily for the same reason.

    Re-writing the Linux kernel in Rust would be a unique opportunity to introduce such a copyright assignment requirement, so that the Linux community can better control who gets their source code and what they do with it. Bringing that about will take real leadership...

    1. Roland6 Silver badge

      “Re-writing” in the way you suggest, probably falls under the “modify” clause of the GPL licence of the ‘C’ code you are re-writing and thus carries the same licence.

      To change the licence you need to start afresh, just as Linus did with his “re-write” of Unix.

      I suspect this gets to the nub of the problem: the Rust zealots are lazy, in that they want others to bend to them and change their code, rather than a small group going away for a couple of years to develop a wholly new Rust kernel, which others in the Rust community could enhance into a full OS distribution. If it is really good, people will jump ship.

      1. GNU Enjoyer
        Angel

        >just as Linus did with his “re-write” of Unix.

        While Linus set out to re-write Unix, he didn't actually get around to anything but a re-write of a monolithic Unix-like kernel.

        GNU's Not Unix did re-write most of Unix from scratch and then kept going mind you.

        1. Roland6 Silver badge

          Trouble is at some point you have to make a decision: style over performance.

          Whilst the RTOS I wrote did have many object oriented and modular design features, it was a bit of a monolith so as to not squander the clock cycles of the early x86 CPUs.

          I think writing it now, with modern CPUs, I might have kept more of the modular and object-oriented design in the implementation, given it was targeted at millisecond applications and not microsecond applications.

      2. bazza Silver badge

        Re-writing GNU coreutils licensed with GPL in Rust as uutils licensed with MIT hasn't raised any issues. I don't see why it'd be any different with a kernel.

        https://github.com/uutils/coreutils/blob/main/LICENSE

        1. Ken Hagan Gold badge

          That, as I understand it, is a rewrite starting from a blank sheet of paper. Writing a kernel in Rust would also be OK, and efforts are underway. What is problematic is adding Rust to an existing GPL project and then "taking away" the GPL at the end when none of the original code remains. The trouble is that all of the intermediate phases had your Rust code under a GPL licence.

          I'm not enough of a lawyer to say how hard it would be to develop under two licences. I think VLC's x264 code does that, but it isn't FOSS.

          Added: I am enough of a Reg reader to know that the kernel has not migrated to GPL3 because it would be Hard. That sounds to me like it might be a relevant observation.

          1. Handy Plough

            > I am enough of a Reg reader to know that the kernel has not migrated to GPL3 because it would be Hard.

            Linus Torvalds has publicly stated that he doesn't like the GPLv3, let alone the politics that surround it and those of the FSF. It has little to do with how theoretically hard it would be, and more to do with the opinion of Linus that the GPLv3 is a bad license.

          2. bazza Silver badge

            uutils may be a blank sheet reimplementation. However, reimplementing one piece of software in a completely different language is always a blank sheet reimplementation, regardless of whether one is looking at the original source code. Or at least, that's what I think (I am not a lawyer, seek legal advice, etc).

            The ideas expressed in the C source code are not covered by copyright, because it's patents that protect ideas, not copyright. There are no patent licences associated with using or reproducing the Linux kernel's functionality. All one gains from looking at the Linux C source is ideas, because you can't take one single line of that C source code and use it as-is in another programming language; the Rust compiler would barf on any lines that are C syntax (there may be some syntax overlap). The end result would be a program that implements the same ideas, and maybe even the same interfaces, but that's all. And we know for sure that interfaces are free for all (see Google vs Oracle w.r.t. Java interfaces).

            Things could be a little different if one simply used a translation tool, and only a translation tool. However, no such thing exists; the output needs a lot of fix up. Plus it seems to me that, if one were reimplementing Linux in Rust one would take the opportunity to change how some of the internal interfaces are defined (i.e. in a Rust-friendly way, and not a C-friendly way). So there'd inevitably be a lot of manual work even if one did try to use an auto translator.

            Plagiarism is what it may well count as. But, I'm not sure that that's actionable in the OSS world; pretty sure GPL doesn't stop one being inspired by the licensed source code.

            Also note I'm not advocating moving away from GPL2, just moving towards there being a single copyright holder of the source code to allow better control of companies such as RedHat. GNURadio is licensed under GPL; that doesn't change in the process of copyright assignment. It does mean the EFF - as sole copyright holder - could choose to re-license GNURadio, make it proprietary, profit from it, but that's why one needs to choose the copyright assignee carefully.

  11. Anonymous Coward
    Anonymous Coward

    The problem is

    In the kernel you MUST do things Rust forbids by its very nature.

    There's a lot of very small C/asm in the Linux kernel which accounts for most of its performance. Can you replace that with Rust? No.

    I'm not anti-Rust but the Elephant in the room is that C is more capable than Rust so adding Rust to the kernel is really more effort, no gain.

    And at least in userspace nearly all C problems with memory safety were "not a problem" once valgrind turned up. Yes it took some effort to clean up large code bases but once they were clean keeping them clean was relatively easy.

    1. bazza Silver badge

      Re: The problem is

      Er, I’m not sure that valgrind applies to kernel testing. Applications testing, yes.

    2. Phil Lord

      Re: The problem is

      What things are there that Rust forbids? C/asm? Well, Rust can do that, yes.

      And if Rust cannot do something (as indeed, it could not do everything that was needed), it can be expanded. That has happened already.

    3. Fido

      Re: The problem is

      Since the original design of C was to eliminate as much assembly language as possible, if C were eliminated in favour of Rust, would that imply a need for more assembler in the kernel again?

      If not, could the use of unsafe in Rust be more unsafe than writing the same routine in C for those OS things where unsafe is anyway necessary?
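      For illustration, a minimal runnable sketch (hypothetical names, a local value standing in for real hardware) of the usual answer: the unsafe operation itself is the same as in C, but Rust lets you confine it to one small audited block behind a safe signature, so every call site is checked by the type system:

      ```rust
      /// Safe wrapper around a raw-pointer read. In a real kernel the address
      /// would come from the hardware; here it is derived from a live reference
      /// so the sketch is runnable.
      fn read_reg(reg: &u32) -> u32 {
          let addr = reg as *const u32; // raw pointer, much like a C cast
          // SAFETY: `addr` comes from a live shared reference, so it is valid,
          // aligned, and not mutated for the duration of the read.
          unsafe { addr.read_volatile() }
      }

      fn main() {
          let fake_register: u32 = 0xDEAD_BEEF;
          println!("{:#x}", read_reg(&fake_register)); // prints 0xdeadbeef
      }
      ```

      The `unsafe` block is no safer than the equivalent C, but it is the only place a reviewer has to stare at; the rest of the program can't misuse it.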

    4. Anonymous Coward
      Anonymous Coward

      Re: adding Rust to the kernel is really more effort, no gain.

      Um, the problem is all the memory bugs.

      The gain in using Rust is a reduction in memory bugs.

      If you don't think memory bugs are a problem in linux you haven't been paying attention.

  12. Anonymous Coward
    Anonymous Coward

    What would really put the cat amongst the pigeons would be if someone submitted a patch replacing the DMA source code with a Rust alternative…

  13. MattPDev

    Rewrite it in Rust

    Genuine question. What stops a compatible Linux that is written in Rust (as far as possible)?

    Is there a Copyright issue, taking the whole design work and replicating it in another language?

    If not copyright (legal), is it ethically poor even if you credit the Linux c source in your Rust variant?

    Just a skill issue?

    Redox seem to have taken a slightly different approach but I think they risk having the problems that BSDs have without the established benefits of BSD.

    1. Anonymous Coward Silver badge
      Holmes

      Re: Rewrite it in Rust

      The effort/reward balance. You're suggesting re-writing 40 years of work by thousands of developers for the sole purpose of using a new language. Not going to achieve more performance, compatibility, whatever; just using a different language.

      Nope, not worth it.

      And you won't get enough developer involvement to complete the project before the next greatest language ever comes along.

      1. bazza Silver badge

        Re: Rewrite it in Rust

        That kinda glosses over the benefits derived from having a compiler point out one's memory misuse at build time....

        The thing that will kill off C is if the Universities start teaching Rust instead. And, why not; it's far easier to teach Rust (where the compiler does all the donkey work of pointing out your mistakes) than it is to teach C (where the lecturer actually has to mark homework).

        Projects like Linux are in a bit of a bind because if they don't embrace Rust, the project may find that the world has moved on and suddenly there's no maintainers left nor programmers willing to work in C.

        An alternative - move early - might be painful, or may become a self-fulfilling prophecy; if Linux announced "it's only Rust from now on", that'd be a pretty big hint to all us C programmers to learn Rust. Suddenly, there'd be a lot of Rust developers. On the other hand, Rust could be a bust, in which case moving early actually became moving too soon.

        Moving with the times would be ideal, because it means moving when the supply of developers is already plentiful but they're not all busy doing other things.

        Moving too late risks a project becoming deader than a dodo as the last C developer expires, lights turned off.

        There is a risk that Rust might get surpassed by another new language. If so, its inventor is going to have to hurry up!

        1. Roland6 Silver badge

          Re: Rewrite it in Rust

          >” the benefits derived from having a compiler point out one's memory misuse at build time”

          If your memory misuse is that obvious, it speaks volumes about a programmer's lack of logical thinking skills.

          It amazed me decades back just how many programmers could be so stupid as to release or pass on a memory buffer and then continue to use it. So in some respects it is good these programmers will be unable to compile their Rust code until they implement the correct logic.
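          A minimal sketch (hypothetical names) of that exact bug in Rust: hand the buffer on, then try to keep using it, and the compiler refuses to build the broken version:

          ```rust
          /// Takes ownership of the buffer: after the call, the caller no
          /// longer has it - the "release" is enforced, not a convention.
          fn consume(buf: Vec<u8>) -> usize {
              buf.len()
          }

          fn main() {
              let buf = vec![1u8, 2, 3];
              let n = consume(buf);      // ownership handed over ("released")
              // println!("{:?}", buf);  // ERROR: borrow of moved value `buf`
              //                         // - the use-after-release bug,
              //                         //   rejected at compile time
              println!("consumed {} bytes", n);
          }
          ```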

          >” The thing that will kill off C is if the Universities start teaching Rust instead. And, why not; it's far easier to teach Rust (where the compiler does all the donkey work of pointing out your mistakes) than it is to teach C (where the lecturer actually has to mark homework).”

          Yes Rust adoption will be helped by the universities teaching Rust, just as the adoption of C was aided by the universities.

          As for the rest of your point, you are confusing two different things: teaching a language and assessing a student's comprehension of a language.

          The teaching requires appropriate lectures, introducing students to the language concepts and reference materials. I don't see any real difference here between Rust and C, given I would not regard either as being suitable as a student's first introduction to programming languages.

          The assessment of comprehension can only be done by code reading. Whichever language we were using, any work that was submitted had to include the compiled source code and the test output. The reviewer/reader could confirm they were looking at valid compiled code (i.e. part of the learning was for the student to get all the compiler errors resolved) and was thus looking at the way you had written the code and the logic of your code. Several lecturers then handed out model solutions highlighting the constructs they were expecting anyone wanting a 1st to have used. I don't see how things can be different with Rust.

          >” There is a risk that Rust might get surpassed by another new language. If so, its inventor is going to have to hurry up!”

          Given the history of Algol and the creation of Algol-68 which led directly to the creation of Pascal, I am tempted, given the aspirations for Rust, to suggest Rust has more in common with Algol-68 than Pascal…

    2. Doctor Syntax Silver badge

      Re: Rewrite it in Rust

      "Just a skill issue?"

      More of a scale issue, I'd think. If you started to write your new version now by the time you'd finished Linux would have moved on.

    3. F. Frederick Skitty Silver badge

      Re: Rewrite it in Rust

      "Genuine question. What stops a compatible Linux that is written in Rust?"

      A major disincentive is device drivers. The majority of code in the Linux kernel consists of drivers, and this broad support for hardware is a major attraction. There is no stable API for drivers in Linux, so even if you tried to leverage the availability of Linux drivers in your hypothetical Rust kernel you would either:

      1. Be forever playing catch up to API changes across all the Linux subsystems.

      2. Fork and maintain the drivers yourself.

      A massive maintenance burden either way.

      1. Roland6 Silver badge

        Re: Rewrite it in Rust

        Your answer doesn't really answer the question, although it does give a good reason why, having achieved a Rust-compatible Linux, a decision will need to be made to deprecate the C version, i.e. all new device drivers will need to be in Rust first.

        The trouble is this will probably cause upset as, like Windows 11, we can expect this Rust variant of Linux to not fully support ancient C device drivers…

    4. squizzler

      Re(dox): Rewrite it in Rust

      I think we will be enjoying these arguments over Rust in Linux long after we are reading them within our Redox desktops (or Genode, Haiku or whatever other future OS floats your boat) with full COSMIC GUI and all the apps.

      1. Roland6 Silver badge

        Re: Re(dox): Rewrite it in Rust

        I think if we are using a desktop OS written in C++ or some language other than C or Rust, we will certainly be entertained.

        Not sure about Redox, Fuchsia or other OS written in Rust. Suspect much will depend on where people sat with respect to C and Rust.

        However, we can be sure the majority of Joe Public will be happy they can still watch cat videos.

    5. bazza Silver badge

      Re: Rewrite it in Rust

      There is no reason why one cannot write a Linux compatible kernel in Rust.

      In fact, writing Linux-compatible kernels is pretty commonplace, and has been done several times. Windows Subsystem for Linux version 1 put a Linux system call interface on top of the Windows kernel, making it binary compatible with Linux apps. For all intents and purposes, it made Windows' kernel into a Linux kernel. QNX, Solaris and FreeBSD have all done the same thing. The fact that these kernels have their own native system call interface too is neither here nor there.

      In principle, if done perfectly, a running application would not be able to tell what the actual kernel underneath it is.

      Adding the Linux sys call interface to Windows' kernel is quite interesting. There's a fundamental difference between Windows and Unixes. In Unix/POSIX, most things crop up as a file, and the select() or epoll() function call works on any file-like device. The functions select() and epoll() are "reactors". What Windows is built around is "proactors": segments of code, each in its own thread, blocked trying to do input or output on a device. On Windows, select() works only on network sockets; there's no such thing as reacting to events on serial ports, IPC pipes, etc. This is basically down to the internals of the OS.

      The thing is, to provide WSL1 Microsoft must have altered the Windows kernel in some quite radical ways to support the select() and epoll() function calls for pipes and devices other than sockets, but have chosen to not expose the equivalent functionality in their own win32 API.

      You can still find the consequences of this major difference. Cygwin's developers had this hilarious conversation (decades ago now) when they realised that they couldn't implement select() properly within Cygwin. So, they start up a thread per file descriptor (which is mapped on to some underlying Windows handle) and poll the device for readiness / events. That is extremely inefficient as you end up with a lot of threads busy looping (the very thing select() is supposed to do away with!). And the Boost C++ library chose to implement async I/O (which is what Windows wants you to use) instead of reactor, explicitly citing the impossibility of implementing the latter on Windows.

  14. Sykowasp

    NIH NIMBY developers

    What a bunch of babies who wouldn't last past probation in any serious software development business.

    Research (by Google) shows that older C/C++ codebases are often fairly memory safe because of years of use catching most of the flaws. It is the newer code that has memory safety issues, and that is exactly what allowing Rust based contributions will help with.

    Rust lives in a separate subfolder in the kernel. It maintains a separate interface to interact with the C-based components. The C-based components aren't going anywhere; they aren't getting rewritten, especially the core proven components. It isn't a good use of resources to do so.

    The size of someone's ego to get so upset at being called out by Linus that you resign from both Linux kernel maintainer and your own variant of Linux...

    1. Anonymous Coward
      Anonymous Coward

      Re: NIH NIMBY developers

      He's "quit", but Hector was still posting on the LKML under his Lina alt afterwards.

  15. Irongut Silver badge

    The adults can get back to work

    Once the social media prima donna has left the building the adults can get back to work.

    This is exactly what I expected last week when Linus made his statement about social media brigading and the prima donna quit.

  16. Rich 2 Silver badge

    Why rust?

    I have no issue with Rust - I’ve never used it so can’t comment.

    But why is it always Rust? Why not Zig? Or some other “memory safe” language. Yes, I know it has to work at a low-enough level but is Rust really the only game in town?

    1. Blazde Silver badge

      Re: Why rust?

      Zig doesn't have nearly the same memory safety guarantees. Ada's more memory-safe, but it's sort of like the difference between C and C++: you need to consciously do the right thing all the time to get safety. Other languages with any level of memory safety achieve it via performance-killing garbage collectors and/or bytecode/interpretation, and aren't appropriate for kernel-mode development for other key reasons, like not supporting inline assembly or not extending the memory safety to multithreading. Swift, as one example, could work in theory, but uses reference counting to achieve safety, and that's prone to memory leaks caused by cycles, and a kernel is the last place you want those. You also need to drop to Objective-C to do low-level stuff, and you need to manually deal with a large range of data-race gotchas that Rust solves for you.

      Rust is not perfect but its combination of kernel-appropriate safety features is just completely unique among languages of any maturity. A lot of other languages are now trying to bolt on similar safety, often following Rust's borrow-checker & unsafe block model, but they'll generally need to break backward compatibility to achieve that, if it's even possible at all.
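      A small runnable sketch (hypothetical names) of what "extending the memory safety to multithreading" means in practice: the single-threaded `Rc`/`RefCell` pair simply won't compile if shared across threads (`Rc` is not `Send`), while the thread-safe `Arc`/`Mutex` pair compiles and is race-free:

      ```rust
      use std::sync::{Arc, Mutex};
      use std::thread;

      /// Four threads bump a shared counter 1000 times each. The Mutex
      /// guards every access; the compiler rejects any unguarded sharing.
      fn parallel_count() -> u32 {
          let counter = Arc::new(Mutex::new(0u32));
          let handles: Vec<_> = (0..4)
              .map(|_| {
                  let counter = Arc::clone(&counter);
                  thread::spawn(move || {
                      for _ in 0..1000 {
                          *counter.lock().unwrap() += 1;
                      }
                  })
              })
              .collect();
          for h in handles {
              h.join().unwrap();
          }
          let total = *counter.lock().unwrap();
          total
      }

      fn main() {
          println!("{}", parallel_count()); // always 4000, never a torn update
      }
      ```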

    2. claimed

      Re: Why rust?

      Give it a go. The borrow checker is mad but once it clicks it is awesome - just fixes classes and classes of bugs and all you have to do is to write scoped code.
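      A minimal sketch (hypothetical names) of what "scoped code" buys you: a borrow lives only as long as its scope, so the compiler knows exactly when the buffer is free to mutate again:

      ```rust
      fn demo() -> Vec<i32> {
          let mut buf = vec![1, 2, 3];
          {
              let view = &buf;          // shared, read-only borrow
              println!("{:?}", view);
          }                             // the borrow ends here, with its scope
          buf.push(4);                  // so a mutable use is allowed again
          buf
      }

      fn main() {
          println!("{:?}", demo()); // [1, 2, 3, 4]
      }
      ```

      Swap the order - push while `view` is still alive - and the borrow checker rejects the program at compile time.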

      Also Mozilla is a fairly big boy so it hit the ground running

      1. Baudwalk

        If they'd just...

        ...add inheritance OO as an option too, I'd probably switch my primary go-to language to Rust.

        Without inheritance, I just can't be bothered to spend the time getting properly up to speed on a replacement for my current all-rounder, C++.

    3. Roland6 Silver badge

      Re: Why rust?

      Backers potentially with deep pockets to make things happen: The five founding companies of the Rust Foundation: Amazon Web Services, Google, Huawei, Microsoft, and Mozilla.

      I suspect a key difference is the scope. Graydon Hoare had a clear vision of what Rust should be:

      “Hoare emphasized prioritizing good ideas from old languages over new development … stating "many older languages [are] better than new ones", and describing the language as "technology from the past come to save the future from itself."[17]: 8:17 [18] Early Rust developer Manish Goregaokar similarly described Rust as being based on "mostly decades-old research." “

      [ https://en.wikipedia.org/wiki/Rust_(programming_language) ]

      My understanding is that Zig does not have such ambitions, but is more focused on what it is trying to achieve and is more of a stepwise refinement of C/C++. I suspect Rust can (and will) learn from Zig and other attempts at languages that aim to improve the quality of executable code.

  17. Paul Herber Silver badge
    Pint

    "Count Torvalds .."

    The Linux aristocracy.

    But I'd put him a bit higher up the rankings, more of an Earl or a Duke, if not King!

    A toast to the King. Mmmm, toast.

    1. jake Silver badge

      Actually, ...

      ... he's the Benevolent Dictator For Life, as any fule no.

  18. Our_Enoch

    Why can't we just argue?

    This is software, not a war - isn't this kind of argument how things get done? It matters. It's not, directly, life or death. I feel for the less assertive, but this is no holds barred, isn't it? Albeit with a hopefully good conclusion in a few years.

    1. This post has been deleted by its author

    2. jake Silver badge

      Re: Why can't we just argue?

      The only people who think it's a "war" are the type who believe that HR is useful.

      To the rest of us, it's a discussion about engineering.

  19. BinkyTheMagicPaperclip Silver badge

    Not sure about Kroah-Hartman

    Rust probably does have a role to play, but to say 'The C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible if they wish to have any codebase that can be maintained for any length of time'

    Oh, so just because there are committee issues (which the committee should have a self interest in fixing eventually) let's junk the entire language, and move to one which doesn't even have a formal definition or multiple compilers yet?

    It's *so very easy* to say 'hey, we should all use this thing that doesn't do exactly what we want in all circumstances yet, hasn't been exposed to as many edge cases, and will totally not be subject to all sorts of cruft and compromise, unlike every other system and language in history'.

    Use AND. Add Rust that doesn't overly burden maintainers. Look to incorporate more memory safe C and C++ interfaces. Then whichever turns out to be the best overall solution will win.

    It would probably also make a great deal of sense to move more driver logic into user mode code, my understanding is that this work is further along in Windows than in Linux.

    1. Zolko Silver badge

      Re: Not sure about Kroah-Hartman

      move more driver logic into user mode code

      I agree, but that would be opposed to what Linux and Linus have been about for ... ever. Look at the initial Tanenbaum -vs- Torvalds debate, and later the stable ABI "argument". So that's not going to happen in Linux. If you mix this with Rust, the only conclusion you can draw is that Linux is a monolithic kernel written in C and cannot change. A Rust and/or micro-kernel will not be derived from Linux and will be a clean implementation (not necessarily in Rust though).

      And probably by China

  20. Fruit and Nutcase Silver badge

    Deal with it

    Same as the init vs systemd debate.

    Without opening up that debate here...

    The big distributions have adopted systemd. For the likes of RedHat, it has to work. Their very existence depends on a product that works.

    Embrace and move with the times. It's not as if it's a call out to use AI to code the kernel.

    We should really have an AI Linus bot ready for when he pops his clogs - the kernel news and release missives can continue to be issued in his style

  21. TheOldFellow

    Bring back Machine Code...

    The pace of development arguments go on and up...

    I started in Machine Code in 1965, and I am NOT going to use modern shit like Algol and C...

    1. Roland6 Silver badge

      Re: Bring back Machine Code...

      Updated to paper tape or still using the front panel switches ? :)

  22. Anonymous Coward
    Anonymous Coward

    "but we are kernel developers, dammit"

    That is the most kernel dev quote I've heard in a long time.

  23. Justthefacts Silver badge

    Load-bearing borrow-checker

    Another important concern about Rust is just how load-bearing that borrow-checker is for supply-chain security. Compilers have bugs. They do. Very few, because of the sheer quantity of code that gets thrown at them, the diligence of a lot of software engineers, and how long they have been out there. But it does happen, and a normal compiler bug gets detected when the compiler observably emits incorrect assembler. Whereas a false-negative borrow-checker bug would emit apparently-correct assembler that simply encodes incorrect analysis, which then fails under some rare operational cases.

    What will happen when somebody spots the first false-negative borrow-checker bug in five years' time? The borrow-checker is single source. In fact, the borrow-checker spec is whatever the LLVM team thought was correct to implement, those checks and no more. Suddenly, you've got a massive distributed installed base of all-compiled-Rust-code, with edge cases where the borrow-checker failed to spot the underlying bug in the source code. The underlying problem is latent bugs being coded-to-the-test. Not even the best Rust developers would pretend to write Rust that is compile-right-first-time code. They re-factor until the borrow-checker tells them it's good, and then stop.

    On the one side, black hats overnight-run the patched Rust compiler over all known open-source code, producing a dictionary of all vulnerable code, listing the exact LOC of faults and the input data required to trigger the exposed memory-overrun vulnerability.

    And on the other side, simply using the patched Rust compiler to recompile your binaries does not fix your security hole. The problem and the fix are in the Rust code of thousands of projects, not the compiler. All you get is a compilation error, not a fixed binary. You then need to wait for somebody responsible for the source code of each project to read and fix the specific edge case that the borrow-checker previously failed to identify. But it's not just one Rust developer who MUST-FIX, it's *all of them*, because there will be thousands of code bases suddenly demanding their attention overnight. Consider 1000 different bugs of the Log4j class and distribution, all being revealed on the same night, with fifty of them in the Linux kernel.

    Anyway, what do I know. I’m not a Rust developer, I’m sure rock-star developers like Hector Martin have got this covered.

    1. Phil Lord

      Re: Load-bearing borrow-checker

      The borrow checker and the type system are a key part of Rust's soundness guarantees. But, obviously, bugs can arise and they have done so already. It is considered a problem and once it is discovered, it gets fixed. Some Rust code that previously compiled may now fail to compile. And where it is possible to do so, Rust normally ships a linter which picks up and fixes the problem automatically in source.

      I am not sure what your point is? If rust did not have the borrow checker or a type system, then clearly there could be no soundness bugs in either. How does that make things better?

      1. Justthefacts Silver badge

        Re: Load-bearing borrow-checker

        “It is considered a problem and once it is discovered, then it gets fixed.”

        You’re looking at this as an individual, rather than the ecosystem risk. The Rust compiler can “get fixed” as soon as you like (even same-day) but as I pointed out this does not address the ecosystem problem at all.

        What happens next, is that the vast majority of existing deployed Rust executables, are simultaneously revealed as vulnerable to attack. Using zero-day vulnerabilities that the *patched* Rust compiler very helpfully lists out in line-level detail for the black hats. How exactly are the strictly finite number of Rust developers going to simultaneously patch dozens of zero-days in every project they’ve ever worked on for last years, each, within a reasonable length of time?

        For emphasis, the roll-out of the Rust compiler patch doesn’t fix anything. It’s simply the starting-gun for every Rust developer to receive large numbers of CVE zero-days simultaneously in their inbox. And then, when they’ve fixed each CVE in source, there’s obviously still the usual flapdoodle to roll out the executables, except this time it’s being done massively parallel across dozens of framework and library inclusions too.

        Consider responsible disclosure: how do you even responsibly disclose the Rust compiler fix, when the Rust developers of thousands of security-critical core software need that fix to even know what is at risk, let alone start debugging each vulnerability. The ecosystem impact is just orders of magnitude worse than any normal compiler bug.

        1. Blazde Silver badge

          Re: Load-bearing borrow-checker

          You've got way carried away with this idea. There have been soundness issues in the borrow-checker, but they're highly contrived issues that no real code runs into, and that's why they've not been caught straight away. The bigger issue is bugs in unsafe code in the standard library. There's been at least one of those with little consequence, maybe more, but still orders of magnitude fewer than in other languages.

          20-odd years ago, MSVC's Structured Exception Handling was broken by a concurrency issue (the single-threaded kind that happens when you reason about exceptions badly). Countless Windows components were broken and vulnerable and were fixed, quite tardily, but tons of other code compiled by MSVC remained broken for many years after, to this day probably. This is the apocalyptic scenario you're describing. You didn't need the magic of a Rust compiler to know they were broken; you knew they were because everything compiled with the broken C++ compiler was broken. Rust seeks to avoid these issues. Trying to pretend it somehow makes them worse is deranged.

          1. Justthefacts Silver badge

            Re: Load-bearing borrow-checker

            Sorry, you are still missing the point. My original statement was “the borrow-checker is carrying a ridiculous amount of weight”, which I stand by. Your statement is mostly a defence of “it’s mature now, so there won’t be a failure, any bugs are very minor edge-cases”. It’s a claim, maybe even valid; but we don’t know and we will never know until it happens.

            The borrow-checker carries the entire weight of the ecosystem in a single-source relatively few lines of code. Maybe it’s strong enough to bear the weight.

            But the *consequence* of a significant failure is beyond catastrophic. It's one publicly disclosed zero-day CVE per Rust borrow-checker error message, synchronous on the day of the LLVM patch. This is just orders of magnitude worse than the MSVC compiler bug, which wasn't a set of exploitable CVEs, and which didn't leave 3rd-party *source* broken.

            1. Blazde Silver badge

              Re: Load-bearing borrow-checker

              I guess I'm completely missing what sort of bug you imagine? The borrow-checker is doing extra checks that other languages don't. If it fails to do that then it will only be in very obscure edge cases, because otherwise badly written code would be compiling but then crashing constantly like it does in other languages, and the result in any case cannot be worse than in other languages which don't do the checks in the first place, no?

              (The MSVC SEH thing was exploitable when chained with a stack overflow with only quite minor alignment requirements. It was played down by Microsoft and in the days before this kind of thing was jumped on by APTs but it was a really bad one because of how much software was affected. It left a lot of DLLs vulnerable, so even if you could recompile your own source you couldn't get the bug out of your software until everybody else recompiled)

              1. Justthefacts Silver badge

                Re: Load-bearing borrow-checker

                I’m sure I’m too late here, the time has passed, but nevertheless:

                “I guess I'm completely missing what sort of bug you imagine?”

                Anything that breaks the naive mental model of memory access. Therefore I'd focus, for example, on the fact that x86 has a strong memory model, while ARM does not. The claim that "the code has been tested, any bugs would have been spotted" largely rests on its having been run on x86. So I'd pick an example like code involving atomics. And sure enough, a quick Google search throws up this, which specifically talks about Rust:

                https://www.nickwilcox.com/blog/arm_vs_x86_memory_model/

                Scroll down to the bit where it says….” We should be able to see the risk here. The mapping between the theoretical Rust memory model and the X86 memory model is more forgiving to programmer error. It’s possible for us to write code that is wrong with respect to the abstract memory model, but still have it produce the correct assembly code and work correctly on [x86]”

                This. This shit is what I'm talking about. And on top of that, we have Rust code that is compiled for x86 *and then run via Rosetta emulation on ARM*. Does it work? Maybe. Maybe not. If it doesn't, how much would you like to bet that the Rust LLVM folks will say it is a Rosetta problem, DO_NOT_FIX, while the world burns around them with hundreds of zero-days on Apple devices.
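                For the record, a minimal runnable sketch (hypothetical names) of the message-passing pattern that blog post discusses: a writer publishes data, then raises a flag with Release; the reader spins on the flag with Acquire. With two Relaxed operations instead, the code would only *appear* correct on x86's strong memory model and could read stale data on ARM; the Release/Acquire pairing is correct on both:

                ```rust
                use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
                use std::sync::Arc;
                use std::thread;

                fn publish_and_read() -> u32 {
                    let data = Arc::new(AtomicU32::new(0));
                    let ready = Arc::new(AtomicBool::new(false));

                    let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
                    let writer = thread::spawn(move || {
                        d.store(42, Ordering::Relaxed);
                        r.store(true, Ordering::Release); // publishes everything before it...
                    });

                    while !ready.load(Ordering::Acquire) {} // ...to whoever Acquires here
                    writer.join().unwrap();
                    data.load(Ordering::Relaxed)
                }

                fn main() {
                    println!("{}", publish_and_read()); // 42 on any architecture
                }
                ```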

      2. Justthefacts Silver badge

        Re: Load-bearing borrow-checker

        “If rust did not have the borrow checker or a type system, then clearly there could be no soundness bugs in either. How does that make things better?”

        [Lack of] Simultaneous disclosure of vast numbers of related soundness bugs across the entire ecosystem. With the added benefit to black hats that the Rust compiler *patch* is the perfect automated tool to bulk-identify the targets and exact failure mode of all publicly visible open-source Rust.

        That single-sourced borrow-checker is bearing a ridiculous amount of load.

        1. Phil Lord

          Re: Load-bearing borrow-checker

          If a soundness problem is discovered in a C API, for example one where the API can easily be used in the wrong way, and that opens up an exploitable hole, then I see the same situation. The CVE would be released and any static analysis tool - for example, a C compiler - would easily be able to identify where that exploitable hole was created. I cannot see how having a correctness tool like the borrow checker or Rust's type system makes that worse, any more than the many C-based static analysis checkers that are used on Linux, all of which could themselves be buggy and might in turn allow secondary soundness bugs through.

          Of course, if a bug is discovered, there is an issue with disclosure. Once the CVE is created and made public everyone and their dog knows how to exploit the problem because the CVE describes it. That is why there is a process for non public disclosure.

          My conclusion: any language compiler and standard (or "core" for R4L) library carries a particular risk because they can potentially allow exploits in an entire ecosystem.

          Adding Rust to Linux clearly increases that surface; it is one of the problems that a multi-language project introduces. Against this, Rust reduces or removes entire classes of security bugs.

          I see no reason why you single out the borrow checker as particularly worrying beyond that.

  24. ThereBePirates

    About time. Moving away from non-memory-safe languages, given the improvements in such things over the years, should be a positive thing for the Linux community.

  25. ChromenulAI

    C compiles to Assembly.

    C++ compiles to Assembly.

    Rust compiles to Assembly.

    Python is interpreted to Assembly.

    Javascript is interpreted to Assembly.

    Java is interpreted to Assembly.

    C# is interpreted to Assembly.

    CPU executes Assembly.

    If you want to kill Rustaceans, just write Assembly.

    1. sabroni Silver badge
      Boffin

      Assembler is not machine code

      Assembler is the human interface that goes on top of machine code.

      All of those things in your list run as machine code, not assembly.

  26. joeldillon

    So, err, is 'Linux royalty' basically just Greg Kroah-Hartman here?

    1. OhForF' Silver badge

      Did you miss the article referencing Count Torvalds?

  27. trevorde Silver badge

    Best language for kernel development

    It has to be Javascript - lots of eager young devs, easy to learn, dynamic typing, proven performance and lots of frameworks. What is not to like?

  28. imanidiot Silver badge

    "allowing developers and maintainers more time to focus on the real bugs that happen"

    This does ignore the possibility of more real bugs happening because of implementing a second language in the kernel, and any "grain boundaries" that now occur between the Rust parts and C parts in what was formerly a "mono-crystalline" kernel.
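    To make the "grain boundary" concrete, a minimal sketch (hypothetical function name; both sides live in one file here so it's runnable, whereas in the kernel the caller would be real C): the two languages meet at `extern "C"` signatures, and Rust's guarantees stop at that line:

    ```rust
    /// A Rust function with the C ABI, callable from C code. At this
    /// boundary the C caller, not the borrow checker, must honour the
    /// contract - which is exactly where the "grain boundary" bugs live.
    pub extern "C" fn rust_add(a: i32, b: i32) -> i32 {
        a + b
    }

    fn main() {
        println!("{}", rust_add(2, 3)); // prints 5
    }
    ```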
