Rust for Linux maintainer steps down in frustration with 'nontechnical nonsense'

Efforts to add Rust code to the Linux kernel suffered a setback last Thursday when one of the maintainers of the Rust for Linux project stepped down – citing frustration with "nontechnical nonsense." Wedson Almeida Filho, a software engineer at Microsoft who has overseen the Rust for Linux project, announced his resignation in …

  1. Harry Kiri

    Other problems

    Part of my life is involved in finding issues with the incorporation of new technology to upgrade or 'improve' things, prior to bringing it into service. There are always a bunch of knock-on issues related to through-life operations of the shiny new, some of them so significant that they result in major changes to the solution, or in several cases the solution is dropped. I see both sides here: yes, Rust 'can' improve certain things, and if this was a green-field project, fantastic. Unfortunately there are existing personnel, skill-sets, processes, legacy code-bases etc that the shiny new has to fit into nicely.

    It's very common for tech providers to go 'I've solved all of your problems!' and for the stakeholders to go... 'You do know how things work around here, right???'

    1. Anonymous Coward
      Anonymous Coward

      Re: Other problems

      "It's very common for tech providers to go 'I've solved all of your problems!' and for the stakeholders to go... 'You do know how things work around here, right???'"

      This seems to be a nice example, yes. Walks in, learns just adding rust doesn't automatically make linux fart rainbows, complains about "nontechnical nonsense" and "bikeshedding" when confronted with people seeing their maintenance workload increased for no gain to them or the wider community. This is a project for the benefit of the rustaceans. rustfapping, for short.

      And here I was hoping it was some rust CoC nonsense that had him step down. More's the pity. But apparently at least this relentless rust promoter has no staying power when confronted with real-world issues. Thus, if you ever get stuck with a rustifyer, be sure to have a back-out plan.

      ""I truly believe the future of kernels is with memory-safe languages," Filho's note continued. "I am no visionary but if Linux doesn't internalize this, I'm afraid some other kernel will do to it what it did to Unix.""

      So he's a true believer but no visionary. As it happens, I share neither his belief nor his lack-of-vision nor his fear. Some other kernel (like whose, you're threatening linux with kernel.dll, what?) going rust should have no bearing whatsoever on linux since we were promised left and right, up and down, that it would forever remain "optional". The problem with Unix isn't lack of memory safe languages either, and linux tries very hard not to be Unix anyway. The future of kernels in general isn't by slapping an optional one trick pony language presented as a silver bullet on it. (But I for one wouldn't mind this guy, or some other microsoftian, trying to port kernel.dll to rust. Let's see how that goes. Seems to be a better-delineated project anyway, with being a dll and all. Oh, doesn't work that way? How interesting.)

      The main problem with software projects including operating systems is forever complexity. One way to help reduce complexity is to compartmentalise. It's been long known that Unix' monolithic kernel design was getting a bit long in the tooth back in 1969 already. linux copied it, so it's still stuck with that. (Yes, I'm saying "Tanenbaum was right", even though linus probably wasn't up to that challenge and it would've risked linux going the way of GNU Hurd.) But adding a language that requires a giant and complex code generator written in yet another language is not how you reduce complexity. Going on with rust will reduce maintainability and will likely cost platform coverage going forward, at the latest as soon as the pretense of optionality fades. This may well be how that pretense fades, in fact.

      So rust does exactly nothing to reduce complexity. If you want to fiddle the language, it would be a much better idea to go after its weaknesses, for example by learning from C++, rather than replacing it wholesale. Or adding a friendly neighbourhood one trick pony that isn't going to replace it wholesale, no siree, honest!

      So what this guy's saying is bald-faced lies. But we'll give him the benefit of the doubt, that with being not a visionary and all. Just a misguided true believer. And a microsoftian working on linux. That too. "There are no conflicts of interest there at all either," said Comical Ali.

      1. Anonymous Coward
        Anonymous Coward

        Re: Other problems

        Hi Ted :)

        1. Anonymous Coward
          Anonymous Coward

          Re: Other problems

          You wish, but no.

      2. FIA Silver badge

        Re: Other problems

        This seems to be a nice example, yes. Walks in, learns just adding rust doesn't automatically make linux fart rainbows, complains about "nontechnical nonsense" and "bikeshedding" when confronted with people seeing their maintenance workload increased for no gain to them or the wider community

        Wasn't the complaint that he (Ts'o) couldn't change stuff without breaking things and couldn't be arsed to fix the things he's broken?

        When dealing with drivers surely this kind of rigid interface is a good thing, not bad.

        Put it another way, how many person-days are wasted in various companies round the world keeping their linux drivers up to date because some random kernel wonk has decided to change an interface because it suits them?

        But like many things Linux, this is long past technical and now well into the evangelical, so it's just sit back and get the popcorn out I suppose. ;)

        1. Dan 55 Silver badge

          Re: Other problems

          Wasn't the complaint that he (Ts'o) couldn't change stuff without breaking things and couldn't be arsed to fix the things he's broken?

          Hello, I'm going to commit a bunch of Rust stuff which affects your area of the kernel. I'm not going to force you to learn Rust or prevent you from refactoring your C code, but if there is a dependency which means the Rust part doesn't compile any more, and therefore the kernel doesn't compile any more, then you'll have to learn Rust to fix it or not refactor your code. What, you don't like that? Why not?

          1. GolDDranks

            Re: Other problems

            I hope you understand that here, the Rust code depends on C code, not the other way around. Linus explicitly allowed merging Rust in if it's constrained to be only in the "leaf nodes" of the kernel dependency tree. The whole brouhaha here seems to be about Ts'o bitching about a situation that he himself misunderstood: in the video, Wedson repeatedly tries to assure him that the Rust guys are going to do the fixing of the Rust part, so no old-school kernel devs are forced to learn Rust.

            1. Georgski

              Re: Other problems

              The talk is about abstractions over the filesystem, those don't sound like "leaf nodes", they will have many other Rusty nodes hanging off them. Are core maintainers going to be OK breaking all those nodes once there are many of them?

              I think there is stuff to talk about there and Wedson & team viewed it as very off-topic. It seems like a bad omen for later.

          2. Rapier

            Re: Other problems

            Except that's literally the opposite of what was proposed and agreed upon. The burden of maintaining coherency is on the rust developers.

            This all sounds like a not invented here problem.

        2. Orv Silver badge

          Re: Other problems

          In the Linux world, not having a fixed kernel ABI is seen as a virtue, because it discourages commercial closed-source driver development. This has been true for literally decades.

      3. RedGreen925 Bronze badge

        Re: Other problems

        "So what this guy's saying is bald-faced lies. But we'll give him the benefit of the doubt, that with being not a visionary and all. Just a misguided true believer. And a microsoftian working on linux. That too. "There are no conflicts of interest there at all either," said Comical Ali."

        There are all kinds of Microsoft plants working on Linux, more openly recently when they head home to the mother ship after a few years of their subversion having taken effect. With it entrenched like the systemd garbage they go home to get their reward with a nice high paying job. When they do not need to worry about the rubes in open source doing a god damn thing to remove their fine work for Microsoft and their attempt to change it toward their aims. We are in the extend phase of the embrace, extend and extinguish playbook of theirs. With full on support from most of these supposed freedom warriors in open source and the free software movement now fully controlled by the parasite corporations. There are very few left to defend free software as most of the distributions have bought into this new subversion hook, line and sinker. And like the fish once caught are soon to be gutted and gobbled up.

        1. Adair Silver badge

          Re: Other problems

          Evolution is a wonderful thing. When something no longer has the means to evolve fast enough to survive, something else is almost always available to fill the vacuum. If 'Linux' ever disappears up its own fundament, or is overwhelmed by external forces (malign or otherwise), the FLOSS environment will always enable a replacement. And the whole cycle will repeat itself, but that's okay, just so long as there is room for 'freedom' to express itself.

          1. TheMeerkat Silver badge

            Re: Other problems

            > the FLOSS environment will always enable a replacement

            I don’t think FLOSS these days would be able to repeat what Linus and others did at the beginning. It is a different environment today, with a higher threshold of entry.

            1. Ian Johnston Silver badge

              Re: Other problems

              Perhaps the inevitable successor to Linux won't have to run on everything from supercomputers to doorbells, and the successor will actually be a host of successors, each small, focussed, much more easily maintained and much more secure.

              1. Adair Silver badge

                Re: Other problems

                There are already options lurking in the undergrowth, just needing the right environmental conditions to push one or more of them into a much more significant place in the ecosystem. And then there is always the possibility of something needful being created de novo, because the need is urgent.

          2. nijam Silver badge

            Re: Other problems

            > When something no longer has the means to evolve fast enough...

            Whatever gains Rust is claimed to produce seem to increase the effort of making incremental changes by an order of magnitude. It's less evolutionary, in other words. Or more constipationary, to coin another word.

            1. Orv Silver badge

              Re: Other problems

              Security vs agility is often a tradeoff, which is why so much of our software is hideously insecure.

        2. fg_swe Silver badge

          xBSD Waiting

          ...to replace Linux, if/when needed.

          1. Arthur the cat Silver badge

            Re: xBSD Waiting

            I believe Linus is on record as saying that if the BSDs hadn't been caught up in legal fuckwittery back in the 90s then Linux probably would never have existed. As it is I suspect Linux and the BSDs will all continue into the future, each doing different jobs that they suit.

            1. RAMChYLD Bronze badge

              Re: xBSD Waiting

              There's also illumos...

              1. wub
                Coat

                Re: xBSD Waiting

                That'll be great - I've still got my t-shirt!!

          2. BinkyTheMagicPaperclip Silver badge

            Re: xBSD Waiting

            Have you not seen the very long thread on the FreeBSD forums about sticking Rust in the base system?

            1. Anonymous Coward
              Anonymous Coward

              Re: xBSD Waiting

              No, I haven't. The only thing that surprises me about it is that the cringe still hurts me a tiny little bit. It used to be a good system, once. Meanwhile they've thrown away the je ne sais quoi that made it really good by replacing large parts with over-engineered, immature and overwrought worse replacements, made good people scurry off, swallowed the CoC pill (while noticing you don't need one, but keeping it around anyway, thus learning nothing), and generally been assholes to the userbase. It doesn't surprise me that the nincompoops running this shitshow think rust is the hot new thing thus needing to be in on it. This even though they've kicked even pkgng out to ports (thus giving rise to a wide range of very nasty problems, all entirely avoidable), so really the only excuse for putting rust in base is to be able to rewrite parts of base in rust. Which again would be entirely premature, since the usual and natural way that would happen is to provide written-in-rust replacements from ports until well and truly mature, then make the swap. Since those don't begin to be available, this is again politicking. And yet more shame on freebsd for abandoning its core values for bullshit.

              For those not in the know, "base system" is the tarball you dump on a new filesystem (along with the /etc tarball, and a kernel) to end up with a basic functional OS install. For freebsd this includes everything to build the system too (except for actual source, that's another tarball), so compilers and interpreters as needed. perl used to be in base until someone rewrote the build scripts depending on it to use something else, so heavy perl users could install and use newer, fancier perls from ports without name conflicts. perl in base couldn't be upgraded as easily because it had to stay the same for at least the entire run of the current major version. Adding rust, still very much a moving target, to base now? Makes zero sense. Less than zero, really.

        3. Rapier

          Re: Other problems

          That's an interesting perspective. It's also about 20 years out of date.

      4. TV nerd

        Re: Other problems

        It was also known back in 1969 that low level languages such as C and assembly were not appropriate for kernel development.

        ICL's VME/B is but one example of a kernel written in a 'memory safe' language that allowed a programmer to properly exploit the hardware, yet prevent C-style trampling of random bits of memory. VME/B development started in the 1960s and is still in use in critical systems today. I would point out (from direct experience) that when the C compiler arrived for VME, it exposed some of the awful practices of (mostly American) firms using C as their development language.

        Ultimately, and I say this as someone who has many years experience in embedded programming, development tools that make it hard to do stupid things are a great idea. Rust is not ideal, but perhaps may be the best way of breaking away from the dominance of C.

        1. Anonymous Coward
          Anonymous Coward

          Re: Other problems

          ``Rust is not ideal, but perhaps may be the best way of breaking away from the dominance of C.''

          Taking the rest of your argument at face value for the sake of argument, I still say no to this.

          rust consistently over-promises and under-delivers, at horrendous cost and with heaps of technical debt already, despite being not even a decade old at this point (2015, says wikipedia). They do two things with great zest: propaganda, and reveling in their CoCified community. Technically? Weaksauce. Various kinds of weaksauce on several aspects, even. So going all in on the hype risks backlash against it, and by extension on this "memory safe language" idea too.

          If you're serious about memory safety, there are better options available, including for low-level, even "systems" type work. And if you can't or don't want to do the complete greenfield rewrite that going rust requires, then the Herb Sutter blog linked above offers a much better idea, much less intrusive, much more deployable.

        2. Roland6 Silver badge

          Re: Other problems

          >"Ultimately, and I say this as someone who has many years experience in embedded programming, development tools that make it hard to do stupid things are a great idea. Rust is not ideal, but perhaps may be the best way of breaking away from the dominance of C.2

          Remember, there were alternative toolsets being used (for embedded and safety critical systems) before C (and Unix) took the world by storm as some form of silver bullet; it would not be a surprise if Rust went the same way as Ada...

          1. fg_swe Silver badge

            "Same Way As Ada"

            Ada is used for some of the most demanding and also most successful aeronautical control system projects.

            Here is a hint: https://www.adacore.com/company/our-customers

            1. Roland6 Silver badge

              Re: "Same Way As Ada"

              Sorry, rereading my comment, it does sound like I was dismissing Ada as a failure, when I was actually referring to some of the early 1980's hype which tried to portray Ada as being a more general silver bullet programming language (like Rust) and development environment (as part of my degree we studied Ada and the "Stoneman" APSE).

              Obviously, one of the big limitations of Ada is that you need to be more of a Software Engineer than hacker/programmer, so naturally it would fail in the mass market compared to GWBasic and Pascal.

        3. Arthur the cat Silver badge

          Re: Other problems

          Rust is not ideal, but perhaps may be the best way of breaking away from the dominance of C.

          Have you seen Zig? Similar ideas about safety but a lot lighter in both resources needed to compile and brain power needed to understand. It can also integrate existing C & C++ code within a project.

          1. O'Reg Inalsin

            Re: Other problems

            According to your link, Zig offers a choice between these two:

            - Optimizations - improve speed, harm debugging, harm compile time

            - Runtime Safety Checks - harm speed, harm size, crash instead of undefined behavior

            At least I know that Rust offers the option of compile-time safety checks that coexist with high speed and no runtime safety checks. Zig is just offering a new way (perhaps a better way) to do runtime checks, which was also possible decades ago. But Rust's compile-time memory guarantees are a new and so far unique approach.
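            That compile-time point can be sketched in a few lines (illustrative names, not kernel code): the borrow checker rejects the commented-out mutation before the program ever runs, and the accepted program pays no runtime cost for that check.

            ```rust
            // Minimal sketch of compile-time checking; `largest` is a
            // hypothetical helper chosen for illustration.
            fn largest(xs: &[i32]) -> i32 {
                *xs.iter().max().expect("non-empty slice")
            }

            fn main() {
                let v = vec![1, 2, 3];
                let borrowed = &v;  // shared borrow of v
                // v.push(4);       // rejected at compile time: mutation while borrowed
                println!("{}", largest(borrowed)); // no runtime check was emitted
            }
            ```

            Uncommenting the `v.push(4)` line turns the bug into a compile error rather than a runtime crash or silent corruption, which is the trade Zig's runtime checks don't make.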

            1. Roland6 Silver badge

              Re: Other problems

              >At least I know that Rust offers the option of compile time safety checks

              From the description of the checking, I don't see why this can't be added to C or other language development support toolsets.

              Living-C interpreted the C code and, where variables had not been declared or values didn't exist, asked the user to provide a variable declaration and value, thus giving the programmer the context of usage and a nudge that their logic sequence and variable nesting might not be as intended, not just an undeclared/unresolved reference error message.

              >Rust's compile time memory guarantees

              Without a formal language definition et al to test against, there is no guarantee that the final executable output from the compiler has implemented the memory guarantees. Remember Ada has a Conformity Assessment Authority (ACAA), Rust whilst taking steps in the right direction, has a long way to go.

        4. ICL1900-G3 Silver badge

          Re: Other problems

          Ahhhh! VME... great memories... and a good argument. S3 and SFL, I miss you so much.

      5. ryokeken

        this here is why you don't get laid

        smh

      6. Stuart Castle Silver badge

        Re: Other problems

        I follow a youtuber named Big Clive. He isn't IT. He is actually an electronics engineer (and apparently lighting engineer by trade). He usually buys unsafe devices on ebay, and makes videos where he disassembles them, and goes through how they work, and where they are unsafe. He is an old school engineer, so loves it if he manages to blow stuff up.

        A few years ago, he did a video where he covered a device whose makers claimed that if you plug it into your mains, it will automatically reduce your energy usage and thus save you money. IIRC, these devices are essentially large capacitors and *do* save money in installations that use multiple large motors. I don't really understand how they work, but they apparently do. But your average house doesn't have either a large enough electricity system, or enough devices with a heavy load, to make a difference.

        He made the point that people have bought these devices and reported savings, but those savings are likely more a result of other things the user did. After all, if they are buying devices like this, they are probably implementing other things that save money.

        What made me think of this? Replace the money saving device with Rust. I'm not commenting positively or negatively on Rust itself. It seems like a language with good ideas, but I don't known enough about it to judge it one way or the other.

        However, I doubt it's the magic bullet people present it as. It is memory safe, which will help with some bugs, but you'll probably get a better experience if you use the opportunity to improve the Linux source code while translating it.

        1. David Hicklin Bronze badge

          Re: Other problems

          > I don't really understand how they work, but they apparently do. But your average house doesn't have either a large enough electricity system, or enough devices with a heavy load, to make a difference.

          They act as power factor correction devices. With AC power supplies, inductors (motors) and capacitors draw current that is out of phase with the voltage (and with each other), so adding capacitors corrects the motor inductance effect. It has to be done carefully, and the capacitors are usually quite big.

          1. graeme leggett Silver badge

            Re: Other problems

            if I remember, they save money because of the way that electrical usage is charged by the electricity supplier

            1. robinsonb5

              Re: Other problems

              But doesn't that only really apply to large industrial energy users?

            2. Orv Silver badge

              Re: Other problems

              Yes, although it's more complicated than that. A low power factor means a device draws more amperage than it consumes, and returns the excess on the next cycle (to vastly over-simplify.) This increases resistive losses in the wiring. A home user doesn't care because their resistive losses are mostly pretty low anyway. An industrial user might care some. But the utility REALLY cares, because losses in their distribution system represent power they can't charge for. So they reward large customers for correcting their power factor. For residential neighborhoods they generally just deploy capacitor banks, since the power factor is usually inductive overall (although this is changing.)
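              The arithmetic behind that can be made concrete. A rough sketch with assumed illustrative numbers (a 2 kW load on a 230 V supply): real power is P = V · I · pf, so the same load at a poor power factor draws proportionally more line current, and resistive wiring loss grows with the square of that current.

              ```rust
              // Hypothetical numbers for illustration only: 2 kW real load, 230 V supply.
              fn line_current(p_watts: f64, volts: f64, pf: f64) -> f64 {
                  p_watts / (volts * pf) // rearranged from P = V * I * pf
              }

              fn main() {
                  let (p, v) = (2000.0, 230.0);
                  let i_unity = line_current(p, v, 1.0); // ~8.7 A at pf = 1.0
                  let i_poor = line_current(p, v, 0.7);  // ~12.4 A at pf = 0.7
                  // I^2 R losses in the wiring roughly double at the poor power factor.
                  println!("loss ratio: {:.2}x", (i_poor / i_unity).powi(2)); // ~2.04x
              }
              ```

              That doubled I²R loss is exactly the part the utility eats in its distribution wiring, which is why it rewards large customers for correcting their power factor.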

      7. Rapier

        Re: Other problems

        This is the kind of attitude that drives people away from important open source projects. Willful misinterpretation, character assassination, and pointless FUD isn't a winning strategy my friend.

      8. Pawel44

        Re: Other problems

        What's the point of using a "memory safe language" in an operating system kernel? C is superior to unsafe rust - and that's what you'll be using. Linux and Linus take capital letters. You can write unix and tanenbaum without them. And no, tanenbaum was wrong and Linux has proven that many times already. There's no single serious general-purpose microkernel. It's a terrible idea.

    2. Springsmith
      Holmes

      Re: Other problems

      That is a measured opinion.

      To "...if this was a green-field project, fantastic. Unfortunately there are existing personnel, skill-sets, processes, legacy code-bases etc that the shiny new has to fit into nicely." I would add, that if it is a re-write it has to reach something approaching feature parity to start attracting widespread adoption. For this Linux kernel that is a lot of features.

      The whole "C vs Rust in the kernel" debate reminds me a bit of the "monolith vs the microkernel" debate. "Sure, it sounds better; We are where we are though; We genuinely wish you luck but we can't do both."

      1. Anonymous Coward
        Anonymous Coward

        Re: Other problems

        They're different beasts, though. rust in the linux kernel is a political vehicle of the same order as letting poettering and crew dump a couple of buckets of incompatible lock-in for no reason on the ecosystem. How do you know poettering's finest is political? The "debate" went down to "sysv init is bad and must be replaced post haste (suddenly), thus systemd is the only possible option." The premise was bunk, the logic didn't add up, and the choice was rigged. rust adoption runs on similar reasoning.

        The microkernel idea is a good one but could have done with more hardware support than it has gotten to date. Had, say, QNX not priced itself out of most markets, that might well have been different. (As it was, $35k/CPU, base OS only, in 1998 dollars.) Some of the trouble can be mitigated. Like "canned signals", which QNX does offer. Or, like instead of acting on every keystroke, collate the lot from the buffer and do the graphical scroll in one go. Which the webbrowser in the photon environment on the demo floppy does not do. Thus getting really snappy responses on some things, and stupidly slow responses on others.

        The thing that makes a microkernel slow is that a lot more individually slow context switches happen to make the system go if you stick to the usual way of making syscalls. But if you can manage to pack handfuls of the many many syscalls programs like to make (sometimes, well, often, a tad too many) together and get results in one go, then that goes away. As in, the *BSDs' kqueue(2). So it does require shifting programming paradigms a bit. We didn't do that because we like our 1970s Unix ways, but there's no real reason not to besides inertia.

        1. Richard 12 Silver badge

          Re: Other problems

          Packing like that causes highly variable latency, with the worst case being extremely long.

          Sure, average throughput is improved, but it's r e a l l y i rritatingwhenth i n g s vary between fast and slow.

          1. Anonymous Coward
            Anonymous Coward

            Re: Other problems

            Set a timer (comes in through kqueue too!) every, oh, tenth of a second. That seems to be about the latency human interaction demands of CLIs. Maybe make it a bit shorter for GUI programs. That forces the OS to give back the results it has ready and the answers to the rest will have to wait for the next kevent call.

            Notice that failing to pack can also easily result in undesirable situations. Like a buffer full of keystrokes where handling each (say by scrolling the webpage window up a bit) takes longer than the keyboard repeat rate. Thus you get to sit out the machine scrolling on you. That you can't interrupt either because any cancel command ends up at the back of the queue too.

            And for lots of things it doesn't really matter that much. Like startup code doing a sackful of mmap(2) calls before getting to main(), for programs run in pipes, "C10k" servers, and so on.
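            kqueue(2) itself is BSD-only, but the cost model behind the batching argument above can be sketched portably (this is an analogy, not the event API): buffered writes amortize per-syscall overhead the same way batched kevent results amortize context switches. Many logical operations, one kernel crossing.

            ```rust
            // Analogy for syscall batching (not kqueue itself): BufWriter coalesces
            // 10,000 one-byte writes into a handful of write(2) calls via its buffer.
            use std::fs::File;
            use std::io::{BufWriter, Write};

            fn write_batched(path: &std::path::Path, n: usize) -> std::io::Result<()> {
                let mut out = BufWriter::new(File::create(path)?);
                for _ in 0..n {
                    out.write_all(b"x")?; // lands in the user-space buffer; usually no syscall
                }
                out.flush() // remaining buffered bytes leave in one final write
            }

            fn main() -> std::io::Result<()> {
                let path = std::env::temp_dir().join("batch_demo.txt");
                write_batched(&path, 10_000)?;
                println!("wrote {} bytes", std::fs::metadata(&path)?.len());
                Ok(())
            }
            ```

            The same amortization is what the kqueue comments above are asking for on the event-delivery side: collect a batch of results per kernel crossing instead of paying a context switch per event.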

    3. steelpillow Silver badge

      Re: Other problems

      "Part of my life is involved in finding issues with the incorporation of new technology to upgrade or 'improve' things,"

      Same here.

      Too many people define a system as a bucketful of specific blob functions with bolt-on APIs. Replace some blob with a new one and the APIs rise up to bite your ass.

      But you can equally well define a system as a network of flow protocols with plug-in blobs transforming stuff (deep theorems of graph theory). Replace some blob with a new one and it doesn't matter how it works as long as it deals with the flow protocols.

      Different aspects of a system benefit from different viewpoints. Making a better blob is a blob-focused thing, but interacting with the system is a protocol-focused thing. You have to have stereo vision to avoid stepping in the rat shit.

  2. abend0c4 Silver badge

    Art of the possible

    I have a certain amount of sympathy for both sides in this case.

    Adding more lines of code implies more maintenance and it's already hard to find maintainers - needing specific skills from an even smaller set of potential candidates isn't going to help.

    Equally, we've probably all been in situations where NIH-syndrome has led to foot-dragging.

    However, the real problem, it seems to me, is that the scope, benefits and timeline of Rust for Linux are ill-defined. Indeed, they don't appear to be mentioned anywhere on the project website. It seems mostly focused at present on providing the infrastructure for writing device drivers in Rust, but acknowledges that deprecation of duplicate drivers in Linux means there are unlikely to be Rust replacements for current drivers. The highly hardware-dependent nature of drivers makes them a good place to shake down the mechanisms by which you'd run Rust code in the kernel, but so much of the memory management is done by the Linux driver framework that you would imagine the gains from Rust's memory safety might be fairly modest - and in any case have to await hypothetical future drivers for devices for which no driver currently exists. I don't see any roadmap for introducing Rust into other parts of the kernel, or any analysis of where the benefits might most be felt.

    Of course, there is also the problem that transformational change in a project like Linux is very hard to achieve - it's mature, its stability is critical and it proceeds mostly by discrete incremental changes to its myriad components. You have to start with what you have - and that includes the people as well as the code - and juggle the various competing requirements.

    In open source, the solution to a particular problem also depends on the interests of the people working on it. In this case, the only people working seriously on memory safety in the kernel seem to be Rust developers. I can't help feeling it would be a very different conversation if a group were considering adding Rust-like features to a version of C that could over time be incorporated into the existing code.

    Like politics, it's the art of the possible and that means not only having a solution but persuading other people to adopt it.

    1. Anonymous Coward
      Anonymous Coward

      Re: Art of the possible

      There really is no need for sympathy for the rust developers. They were welcomed with drainage guidelines but don't seem to care about them.

  3. Anonymous Coward
    Anonymous Coward

    Living this dream in my workplace right now...

    Never _ever_ tell people that the thing you're doing won't affect them, they won't have to change, they won't have to learn anything new, etc., etc. because it's the biggest fib going.

    Project A kicks off as a "technical implementation" - drop new software into an existing ecosystem without disturbing anything. Ahahahahahaha, so much naivety. So much WWI trench warfare to go nowhere for years trying to implement.

    Project B looks at the experience of project A, says not doing that again, kicks off as a "transformation" - interactions are similarly old testament biblical in nature but at least it's at the design/talky stages (and talk is cheap) rather than the trying to force a square peg into a round hole implementation.

    Will it make a difference in the end - to be decided at some future date - but it's a hell of a lot easier to rewrite some words on a page than to figure out why several eye-wateringly expensive widgets that you've bought and paid for don't fit together.

    1. Conor Stewart

      Re: Living this dream in my workplace right now...

      That is part of the argument I just don't get: how can they genuinely say that people won't need to learn rust? If rust is part of the Linux kernel and I want to work on the Linux kernel then logically I need to know rust. If I don't know rust then what happens when I come across a section written in rust that I need to alter? Then I have two options: don't alter it, or learn rust.

      Sure when rust is just used for device drivers and components then it may not be essential to know just now but they seem to want it to become a major part of the kernel, so how would you be able to get away with not knowing it then?

      1. xcdb

        Re: Living this dream in my workplace right now...

        Totally agree, but I'm personally not clear on why someone would consider themselves skilled enough to confidently refactor the kernel, yet have a mindset of being unwilling to learn a new tool with different capabilities (ignoring lack of free time to do so...)

        Occasionally, I need to write something in assembly. It is a tool in the toolbox for those occasions where it is warranted, but also crazy-sharp and dangerous.

        1. HMcG

          Re: Living this dream in my workplace right now...

          I would imagine that it’s not an unwillingness to learn something new, just a judgement on the effort vs value of learning Rust in particular. Rust is new, not widely adopted, and as a kernel language unproven. Compared to applying that time and effort to improving the current C code, it’s not a given that learning Rust is of value to a Linux kernel dev.

          The push for incorporating Rust code into the Linux kernel seems to be as much ( if not more) about promoting Rust as it is about improving the Linux kernel.

      2. Anonymous Coward
        Anonymous Coward

        Re: Living this dream in my workplace right now...

        Isn't that what APIs are for? Looks to me like the C dev didn't want to be constrained by the discipline of having to maintain a standard interface, so simply refuses to work with anything he can't personally change.

  4. Ken Hagan Gold badge

    I don't see the ptoblem here

    "Filho's request to get information to statically encode file system interface semantics in Rust bindings"

    Well this is FOSS so the actual interface is just there for anyone to read, so presumably the sticking point is some background context that can't be expressed in C. But here's the rub: Anyone writing C code to use that interface has exactly the same problem and the same range of solutions.

    You can ask a more experienced dev, but they will have limited time to spare for teaching. You can read the source code for the implementation and figure it out yourself, but that's probably a lot of work. Thirdly, you can hope that someone else has gone down one or other of these paths and bothered to document what they learned.

    I don't see how your intended target language affects this.

    (Edit: His proposal to produce a Linux compatible kernel would present similar issues, albeit only at the user-space interface, and he doesn't seem to think this would be all that hard. So what's he actually complaining about?)

    1. Dan 55 Silver badge

      Re: I don't see the ptoblem here

      It's not the same person proposing a Linux-compatible kernel at the end of the article. That's someone else, and it seems he's worked out that no true Rustacean will be happy until the whole Linux kernel is completely rewritten in Rust, so they may as well go and rewrite it in the same way as they rewrite user-space software, which is to disappear for a while and come back and announce they wrote a replacement for something. It sounds much easier for everyone than Rustaceans arriving in ships at the fabled lands of the Linux kernel, disembarking, and trying to convert all the non-believers.

      1. Ken Hagan Gold badge

        Re: I don't see the ptoblem here

        Ta for the correction. I was too hasty with my edit.

      2. Someone Else Silver badge
        Angel

        Re: I don't see the ptoblem here

        It's not the same person proposing a Linux-compatible kernel at the end of the article. That's someone else and it seems he's worked out that no true Rustacean will be happy until the whole Linux kernel is completely rewritten in Rust, [...]

        I said nothing of the sort!

    2. JoeCool Silver badge

      Re: I don't see the ptoblem here

      I highlighted exactly that quote - I don't understand what it's saying - how do you "statically encode ... semantics"?

      My take is that Ts'o played that action forward several moves, and came to the issues he explicitly raised.

      1. roccamoccamrobots

        Re: I don't see the ptoblem here

        The developer with the alias of Asahi Lina has a well-written post about the underlying issue at https://vt.social/@lina/113056457969145576:

        "I think people really don't appreciate just how incomplete Linux kernel API docs are, and how Rust solves part of the problem.

        "I wrote a pile of Rust abstractions for various subsystems. For practically every single one, I had to read the C source code to understand how to use its API.

        "Simply reading the function signature and associated doc comment (if any) or explicit docs (if you're lucky and they exist) almost never fully tells you how to safely use the API. Do you need to hold a lock? Does a ref counted arg transfer the ref or does it take its own ref?

        "When a callback is called are any locks held or do you need to acquire your own? What about free callbacks, are they special? What's the intended locking order? Are there special cases where some operations might take locks in some cases but not others?

        "Is a NULL argument allowed and valid usage, or not? What happens to reference counts in the error case? Is a returned ref counted pointer already incremented, or is it an implied borrow from a reference owned by a passed argument?

        "Is the return value always a valid pointer? Can it be NULL? Or maybe it's an ERR_PTR? Maybe both? What about pointers returned via indirect arguments, are those cleared to NULL on error or left alone? Is it valid to pass a NULL ** if you don't need that return pointer?

        ...[more very interesting details]...

        "To be clear, I don't blame Linux developers for the incomplete docs. For better or worse, the Linux kernel is very complex and has to deal with a lot of subtlety. Most userspace APIs have much simpler rules you have to follow to use them safely. Kernels are hard!

        "Even experienced kernel developers get these things wrong all the time. It's not a skill issue. It's simply not possible for humans to keep all of these complex rules in their head and get them right, every single time. We are not built for that.

        "We need tooling to help us.

        "The solution is called Rust. Encode all the rules in the code and type system once, and never have to worry about them again.

        "Just like the solution to coding style arguments is to encode all the rules in an auto formatter and never have to worry about them again (hint hint! ^^)

        "And then we can stop worrying about all the low-level safety, ownership, and locking problems, and start worrying about more important things like high-level driver and subsystem design.

        "(I should note that the Rust for Linux project does in fact also enforce rustfmt for submissions, so you also don't have to worry about code formatting or have a code review complain about that if you write kernel Rust, ever! Just make rustfmt.)"
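
        The type-system point Lina is making can be sketched in ordinary userspace Rust. This is a hedged illustration, not the real Rust-for-Linux bindings; every name in it (label_or_default, consume_buffer, Stats) is invented for the example:

```rust
// Illustrative userspace sketch, not actual kernel bindings: it shows how
// function signatures can answer the questions Lina lists, so the compiler
// enforces the rules instead of a doc comment.

use std::sync::Mutex;

// "Is a NULL argument allowed and valid usage?" -- the signature says so:
// Option<&str> means "may be absent"; a plain &str would mean "never null".
fn label_or_default(label: Option<&str>) -> String {
    label.unwrap_or("default").to_string()
}

// "Does the callee take its own reference or consume yours?" -- ownership
// is in the signature. This function takes the Vec by value: the caller's
// buffer is moved in, and using it afterwards is a compile error.
fn consume_buffer(buf: Vec<u8>) -> usize {
    buf.len()
}

// "Do you need to hold a lock?" -- wrap the data in the lock, so the only
// way to reach it is through the guard. Forgetting to lock cannot compile.
struct Stats {
    ops: Mutex<u64>,
}

impl Stats {
    fn new() -> Self {
        Stats { ops: Mutex::new(0) }
    }

    fn record_op(&self) {
        let mut n = self.ops.lock().unwrap(); // guard proves the lock is held
        *n += 1;
    } // guard dropped here: unlocking can't be forgotten either

    fn total(&self) -> u64 {
        *self.ops.lock().unwrap()
    }
}
```

        None of this removes the subtlety from the kernel's rules; it just moves the rules somewhere the compiler can check them.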

        1. Anonymous Coward
          Anonymous Coward

          Re: I don't see the ptoblem here

          Marcan has a habit of writing a bunch of code that doesn't quite interoperate with things, then complaining that those things don't change to adapt to his code and demanding other people change their work to adapt to him. Given he has a specific desire to enforce external validation of his coding peculiarities, he should not be taken as a reliable authority on the validity of kernel interfaces.

  5. silent_count

    The age old problem

    Seems like the Rustofarians are facing the classic problem of trying to convert the worshippers from The Church of the Holy C. Yeah sure, the new God promises safety from the sin of memory leaks but the worshippers of the old God are happy with their religion and aren't much interested in learning any new prayers or hymns.

    1. fg_swe Silver badge

      Re: The age old problem

      More "church of fast food".

    2. Mage Silver badge

      Re: The age old problem

      And amazingly C++ for DOS, UNIX, Xenix etc was available by 1987. Linux Kernel 1st release was 1993.

      The problems (whatever problems there are today, and all OSes have problems) are not all going to be magicked away by re-implementing in Rust. Issues of code quality are more about programmer quality than the language chosen.

      The DISER Lilith is a custom-built workstation computer based on the Advanced Micro Devices (AMD) 2901 bit-slicing processor, created by a group led by Niklaus Wirth at ETH Zurich. The project began in 1977, and by 1984 several hundred workstations were in use.

      https://en.wikipedia.org/wiki/Lilith_(computer)

      That used Modula-2. Which, done by good programmers, is memory safe, can be as efficient as C or assembler, can use magic types for device drivers, supports parallelism, virtual functions etc, though it hasn't the nice object syntax of C++ (but you can have objects).

      By all means use Rust, Go, Python or whatever. But re-implementing an existing code base simply creates new bugs and problems no matter how shiny your "flavour" of the day language is.

      The whole point of C was to make it easy to port UNIX and other things to different architectures and CPUs. I only dislike Javascript, Fortran and traditional BASIC (VB6 wasn't bad) more than C, but it has worked well for Linux.

      Linux not being the popular desktop is nothing to do with C vs Rust. If we ignore iOS, then the Linux kernel is #1 everywhere except the desktop, and Linux / GNU OSes a success. Compare 1998 servers with 2023 servers. Set-top boxes, routers etc.

      1. sweh

        Re: The age old problem

        "And amazingly C++ for DOS, UNIX, Xenix etc was available by 1987. Linux Kernel 1st release was 1993."

        FWIW, Linux kernel 0.11 was 1991, 0.95 (where it was going mainstream and early distros appeared, e.g. https://en.wikipedia.org/wiki/MCC_Interim_Linux#Version_0.95c+) was 1992, 0.99 was end 1992; 1.0 was 1994.

        There _was_ an attempt to allow C++ in the kernel in the past but it was pretty much abandoned 'cos the state of the compilers wasn't that good and Torvalds was not a fan of the language: https://lkml.org/lkml/2004/1/20/20

    3. Orv Silver badge

      Re: The age old problem

      Also, as with many churches, the congregation is steadily aging and does not want to incorporate newbies and their strange ways.

  6. Anonymous Coward
    Anonymous Coward

    well he seems like a complete self absorbed prick.

    if it's so easy to make a rusty kernel then fuck off and do it and stop whining like a baby.

    (also as he works for M$, he was probably sabotaging linux (let's make every kernel call X times slower just for fun) like that poettering twat with systemD bullshit)

    1. Irongut Silver badge

      And you sound like a moron.

      Starting with your poor reading skills which have conflated two people into one.

    2. Rapier

      You're just an offensive little man, aren't you?

  7. elsergiovolador Silver badge

    Could have been worse

    Imagine if Linux decided to incorporate Node.js to become more accessible to developers who can't comprehend C.

    1. Anonymous Coward
      Anonymous Coward

      Re: Could have been worse

      You jest, but... lua is making inroads with the *BSDs. In netbsd for drivers, in freebsd to replace the forth used to boot up the system. (And then there was the time when freebsd sported a scsi driver written in perl for a while.) No, I'm not convinced either is a good idea. But it's popular, a FOSS community buzzword, a gottahavefornootherreason, so it's almost inevitable someone stood up and did it, and core@ up and let them. I don't think of lua as small, but it's positively tiny compared to node.js, nevermind rust and the whole circus that brings to town.

      1. Brewster's Angle Grinder Silver badge

        Re: Could have been worse

        I've programmed forth, and I've programmed lua (and I've integrated it into C++ code) and I could see why they would do that. There's no point reinventing the wheel; if you need simple scripting, lua is a good choice.

        1. Anonymous Coward
          Anonymous Coward

          Re: Could have been worse

          I don't think writing device drivers that run in kernel mode is "simple scripting", and freebsd already had a working solution but replaced it with something wildly different anyway. (In a timeframe where they replaced a good deal else on poor reasoning and with poor results that got pooh-poohed away, and repeatedly forgot to communicate crucial detail to the userbase, making it part of a pattern. It caused a bit of a walk-out too. This wasn't the only, or even the worst, poor decision on their part.) So those cases look more like bandwagoneering than anything else to me. Not to slag on other cases where lua is a good fit, mind.

      2. Ken Hagan Gold badge

        Re: Could have been worse

        In a slightly different context, there is also ebpf.

        There's definitely a case for mini-languages that can be verified, all over the world of software development. It's no great surprise that this is true for OS kernels as well. The big unanswered question is whether Rust is suitable in these contexts. Loadable modules, probably ok. Entire subsystems, perhaps too much of a stretch, but really the only way to find out is to do the legwork and produce a fully functional prototype.

      3. Anonymous Coward
        Anonymous Coward

        Re: Could have been worse

        FFS nothing at this level should be in a scripted language, even the simplest script interpreter/compiler adds way too much overhead.

        I blame lazy programmers for this shit.

        1. Orv Silver badge

          Re: Could have been worse

          I'm fine with boot code being scripted, since it's something that people need to modify and interact with. As noted, they used to use Forth for this, going way back, mostly because it was a lightweight scripting language that was available at the time.

    2. F. Frederick Skitty Silver badge

      Re: Could have been worse

      All the people using, or even just experimenting, with Rust know C and C++ well. The awareness of shortcomings with those languages is often cited as the reason they were interested in Rust.

      The problem with the current Rust in the Linux kernel problem is the lack of stable interfaces. This has been a problem for a long, long time and is a key area that distinguishes the BSD kernels from Linux since the former advocate stable interfaces.

      Since most Linux bugs are found in drivers, and are often memory related, it's sensible that the Rust folk concentrate on ways to write those in a safer language. But if the interfaces keep changing and bindings cannot be auto-generated then it's a horrible process.

      As for Ted Ts'o, his behaviour was pretty childish. But then again, he's notorious for breaking things and leaving others to clean up after him.

      1. F. Frederick Skitty Silver badge

        Re: Could have been worse

        That first line should have been:

        "All the people that I personally know who are using, or even just experimenting, with Rust know C and C++ well".

      2. Fruit and Nutcase Silver badge
        Mushroom

        Re: Could have been worse

        But then again, he's notorious for breaking things and leaving others to clean up after him

        Sounds like the Linux kernel workload would be eased in the long term if Mr Ts'o were to retire

      3. Richard 12 Silver badge

        Re: Could have been worse

        In my experience, all the people advocating for Rust barely understand C++ at all.

        I don't know whether those people actually use Rust themselves though.

      4. Charlie Clark Silver badge

        Re: Could have been worse

        I think you raise a key point about the differences between the BSD and Linux kernels. The smaller the kernel is, the easier it is to make it well-designed and, well, safe because boring, no matter which language it's written in: this is one of the points that Tanenbaum wanted to make with Minix and which Linus, for various reasons, some of them good, decided not to go along with - a decision that has led to the sprawling Linux kernel.

        Really, since the death of the low-level restrictions imposed by the x86 architecture, some kind of microkernel for Linux would make sense, with a lot of stuff currently in the kernel moved out and a necessary separation of responsibilities imposing more discipline on components. We've seen many projects, including Haiku, be very successful by adopting this approach.

        But I also think that the proposal for a kernel written in Rust is at least a sleight of hand, if not disingenuous, because it would end up imposing exactly the same kind of discipline. As a result, I think it's unlikely that we'll see such a project being successful unless it's driven by one of the bigger players, in which case my money would be on Google as it needs an OS that can run on much more varied hardware than most. Microsoft's interest in Rust is much more limited to systems administration and deployment.

  8. keithpeter Silver badge
    Windows

    New kernel seems like a good idea

    Quote from OA

    "I am no visionary but if Linux doesn't internalize this, I'm afraid some other kernel will do to [the linux kernel] what [the linux kernel] did to Unix."

    And that actually strikes me as something that would be fine. Quote from De Vault's blog post referenced in OA

    "Here’s the pitch: a motivated group of talented Rust OS developers could build a Linux-compatible kernel, from scratch, very quickly, with no need to engage in LKML politics."

    Bring it on. Show what can be done. A drop-in kernel for specific and limited workloads for (say) servers would probably be of great interest to many people and organisations, some of whom might be in a position to provide some funding.

    'Politics', i.e. the processes that humans use to work together and arrive at decisions, will have to evolve. It will be interesting to see how and what the steady-state result looks like.

    1. Anonymous Coward
      Anonymous Coward

      Seconded

      Much rather have them do a greenfield-but-compatible reimplementation than attach themselves for "we're in the kernel!" bragging points while not actually improving anything. But that won't happen since it carries the real risk of falling on your face by dint of failing to deliver a working kernel. This way is a short-cut to fame. Stolen fame, but that's alright when the goal is propaganda.

      It's also why the guy quits while blaming the people who can for "nontechnical nonsense" (maintenance, ie dealing with technical debt) and "bikeshedding" (perennial favourite). Not his fault he failed, see?

    2. James Anderson

      Re: New kernel seems like a good idea

      Good luck with that. The number of OSes used in the real world has been shrinking every year.

      It’s Linux in all its incarnations AWS, chrome, android etc.

      windows and BSD ( especially the MACOS variant ).

      Plus the cluster of offerings from IBM - z/OS, AIX and whatever OS/400 is called these days.

      And a special mention for Wind River... you probably have several copies running in your car. And there are a couple of instances running on the planet Mars.

      Sure there are still a number of live sites using UNISYS, Solaris, HP-UX, VAX etc. but these are disappearing with each hardware and/or software upgrade.

      Any new OS is going to face the same problem Linux faced in the 90s. No drivers for the thousands of possible devices and no support for popular applications only ten times worse.

      1. Ken Hagan Gold badge

        Re: New kernel seems like a good idea

        Veering off topic here, but I'd have put windows in the VMS camp and BSD in the UNIX camp, not together in their own group.

        The thrust is quite correct though. There are basically two OS lineages in common use and everything else is either a failed experiment or a historical line that is dying out.

        1. Richard 12 Silver badge

          Re: New kernel seems like a good idea

          Plus the hard-real-time group, like VxWorks and FreeRTOS.

          1. Mike 125

            Re: New kernel seems like a good idea

            > Plus the hard-real-time group, like VxWorks and FreeRTOS.

            Yes, deserves a mention - far more devices run an RTOS than run Linux or Microsoft combined.

            ...although definition of hard is a bit soft... 'A late answer is a wrong answer' applies in almost every computing sphere. Only the timescales change.

            There are ways for a system to guarantee milli/micro-second response times for a particular application. But typically, it can't generalise to guaranteeing all response times.

            In my world, a properly configured 'pre-emptive, priority-based' RTOS (like VxWorks and FreeRTOS) covers 99% of requirements.

            https://stackoverflow.com/questions/17308956/differences-between-hard-real-time-soft-real-time-and-firm-real-time

            1. James Anderson

              Re: New kernel seems like a good idea

              Never said they were dying. They were included in the thriving category. Also, Wind River are the creators of VxWorks among other RTOS systems.

          2. cavac

            Re: New kernel seems like a good idea

            I would also count the whole Arduino landscape as more or less a basic OS. It supports many different hardware implementations, has bootloaders for them, sets up timers, handles serial buffers and so on. Plus it comes with a ton of abstractions and loads of libraries (basically "drivers" for all kinds of hardware).

            And it's used by millions of people for their electronics projects.

            1. Richard 12 Silver badge

              Re: New kernel seems like a good idea

              I don't count Arduino as being an OS, I consider it to be a "bare metal" hardware abstraction layer. One could run an OS on top of the Arduino HAL, though most projects won't need to.

              On the other hand, one can certainly argue about when a HAL becomes an OS.

        2. fg_swe Silver badge

          False

          IBM mainframe and its specific OS seems to be alive and kicking in all kinds of business and finance applications. Banking, insurance, flight reservation, taxman and the like.

    3. OhForF' Silver badge
      Linux

      Linux compatible kernel

      Even if they had a 100% compatible working kernel it would take quite some manpower to just maintain compatibility while the linux kernel keeps evolving.

      I doubt the current Rust for Linux team could keep up with that maintenance - much less implement a compatible kernel from scratch.

    4. DS999 Silver badge

      That may be the best path

      Free from interference from the existing kernel crowd they claim they'd be able to port the entire thing in a reasonably short time. If its performance is basically the same, but it has the added benefit of insulation from a large class of memory related bugs, maybe it displaces the C kernel and becomes the place where most of the active development is happening. Then the old guard would have to either learn Rust or retire.

      I'm no big fan of Rust personally, but it is really stupid seeing old farts who don't want to learn new things holding back potential progress. Maybe having an all Rust kernel (with maybe a sprinkling of heavily checked C if there are a few things Rust can't do or can't do as efficiently) is what is needed to show them it is time to quit holding onto the past so tightly. I know C very well, and don't know Rust, but having Linux distros like Fedora switching to the Rust kernel would cause me to pay attention and decide learning Rust was worth the bother.

      The main reason I haven't bothered to learn Rust is that there's always some new hotness everyone is talking about, and if I'd followed the hype train I'd have learned Perl, Java, C#, Go, Python, Swift and a few other languages that would be of basically no use to me (I guess Swift would be useful if I ever got the urge to develop an iOS app I suppose)

      And hey, if it fails for whatever reason at least they can't blame anyone but themselves for that failure.

      1. Fruit and Nutcase Silver badge
        Mushroom

        Re: That may be the best path

        old farts who don't want to learn new things holding back potential progress.

        Indeed. And these old farts reinforce the adage "you can't teach an old dog new tricks", making life difficult for the old farts who will learn new tricks as the job entails.

        Two pivotal moments in my professional life have been when I was able to identify and fix issues in low-level system code, despite being employed in both cases as a programmer for business systems in an unrelated high-level language. I took the time to investigate, understand the root causes, located the sources, walked the code, located the bugs and fixed the issues, in unfamiliar languages. In one case, much to the annoyance of the old fart who was the original writer of the code. In the other case, I was a young whipper snapper, but was an old fart myself here, albeit one who does not flinch at learning new tricks.

        1. Someone Else Silver badge
          Devil

          Re: That may be the best path

          I was a young whipper snapper, but was an old fart myself here [...]

          An old whipper-snapper? I love the concept!

    5. swm

      Rust

      I was interested in Rust but couldn't find an authoritative definition of the language. Is there a standard or at least a current language specification draft?

      1. Reaps

        Re: Rust

        Be careful handling anything you find, it was probably pulled from a rusty starfish!

      2. Roland6 Silver badge

        Re: Rust

        Short answer: No

        It's why I regard Rust as being more hype and of the moment than being for the long haul.

        1. Someone Else Silver badge

          Re: Rust

          As I thought. Call me back when there is an ISO or at least ANSI standard for it.

          Until then? Meh...

      3. Fazal Majid

        Re: Rust

        No, and it is so unstable spec-wise that only the last version of the compiler is guaranteed to compile the next, and to bootstrap it you need to go through 80-something stages, vs 3 for Go. The fact that there is no other implementation, not even in GCC, tells you all you need to know about the maturity of the language.

    6. Roland6 Silver badge

      Re: New kernel seems like a good idea

      >I'm afraid some other kernel will do to [the linux kernel] what [the linux kernel] did to Unix.

      The Mach kernel "did to Unix" years before Linux...

      Change is a given in IT, I'm not afraid of Linux, Windows, MacOS, OS-390 being superseded by some future upstart.

      But then I designed and wrote an RTOS back in the early 1980's which got used in safety critical applications, but those systems had all been replaced by 2000 and thus my work is dust...

      1. Fazal Majid

        Re: New kernel seems like a good idea

        And Mach failed, in that all widespread OSes based on it, like macOS/Darwin or the late Tru64, glom all the kernel components together for performance, recreating a monolithic kernel.

      2. YggOne

        Re: New kernel seems like a good idea

        Well, that was then, this is now. When Linux formed, it was during the days when the internet was relatively new and an unused vector for attacks.

        Now everything and the kitchen sink is wired to the internet, and even simple attacks can do more damage. I suspect that under pressure from governments and companies not wanting to get hacked as much, Linux will be forced to become more secure or get abandoned.

        Does Rust eliminate ALL security vulnerabilities? NO. But it does a better job of it than other languages (e.g. no UAF, no null pointer dereferences). In evolution, you don't need to be the fastest guy, just faster than the other one.

  9. TJ1
    Stop

    There is no static internal API/ABI

    "Filho's request to get information to statically encode file system interface semantics in Rust bindings"

    Unless I'm missing something this seems like an impedance mismatch between long-established kernel development practices where the internal API/ABI can and does change frequently, and a relative newcomer wanting to extend the reach of their new internal interfaces and to do that requiring predictable interface semantics (the bindings).

    From what I can see Ted and others, who have been responsible for the code in the major kernel sub-systems for eons, are fundamentally opposed to suddenly having to consider unrelated domain code (Rust isn't really a sub-system, since it aspires to be used across the kernel, but not sure how else to describe it) that would require them to either:

    1. update the Rust code and bindings if they change the semantics of their sub-system API/ABI themselves - implying needing to master the Rust side to do so

    2. be delayed in implementing changes in their sub-system API/ABI waiting for a Rust-domain developer to implement changes in sync with them

    I suspect the mismatch is due to the Rust developers' backgrounds possibly being in higher-level library and application coding, where static ABI/API are almost guaranteed.
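
    For what it's worth, "statically encoding interface semantics in bindings" can be sketched in a few lines. This is a hypothetical, much-simplified illustration - none of these names (FileSystem, Inode, lookup) are the real Rust-for-Linux abstractions:

```rust
// Hypothetical, much-simplified sketch of a "binding" that encodes
// semantics in types. If the C-side contract changes -- say lookup may
// no longer fail, or ownership of the returned inode moves -- this trait's
// signature must change with it, and every implementation then fails to
// compile instead of silently misusing the interface. That is exactly the
// coupling problem TJ1 describes: C-side churn now implies Rust-side work.

struct Inode {
    ino: u64,
}

trait FileSystem {
    // Option<Inode> encodes "lookup can legitimately find nothing" (no
    // NULL/ERR_PTR guesswork), and returning Inode by value encodes
    // "the caller now owns this reference and is responsible for it".
    fn lookup(&self, name: &str) -> Option<Inode>;
}

// A toy implementation playing the role of a driver author's code.
struct RamFs;

impl FileSystem for RamFs {
    fn lookup(&self, name: &str) -> Option<Inode> {
        if name == "hello.txt" {
            Some(Inode { ino: 2 })
        } else {
            None
        }
    }
}
```

    The upside is that the rules are machine-checked; the downside, as noted above, is that every semantic change on the C side now forces a matching change to the trait and all its implementors.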

    1. Mike 125

      Re: There is no static internal API/ABI

      > impedance mismatch

      Good description.

      That vid sums up so many software conversations over the years...

      The Rust guy seems to be trying to abstract system level concepts into types, and then assuming because it's now abstracted, the job is done, and it's now all up to the compiler. I'm overstating it, but that's how it seems.

      However, there are major differences starting at system level, which have consequences all the way down to the metal. The file system discussion demonstrates it.

      Making those interfaces generic *while maintaining the efficiency of C* was always the elephant in the room.

      BTW, I still want to see Rust succeed.

      This isn't the kind of problem Rust should be trying to solve right now. Maybe they just chose an over-complex example for the talk.

      1. SCP

        Re: There is no static internal API/ABI

        As a very distant observer (with only a passing interest in the topic) the Rust guys look to be asking for information to do a decent abstraction. The experienced kernel guys seem to have accrued a great depth of knowledge from a great deal of time working in the field. The Rust guys have two ways of getting the same knowledge - i) spend years working in the field, or ii) ask those who are knowledgeable.

        From my time in a different field of software engineering I can appreciate the value of a good abstraction. I can also understand that in a system that has evolved over decades there can be a great many subtleties and 'warts' that have 'evolved' that are only understood by those steeped in the design. But this is also a form of technical debt and it seems reasonable for the new guys to want to eliminate some of that debt. I feel sure that it is not their desire to simply write C code in Rust.

        Likewise, I expect the kernel guys see all the problems/challenges and are wary of yet further challenges adding to the maintenance burden.

        Making progress can be so difficult - especially when there are different views on the best way forward.

    2. Ace2 Silver badge
      Pint

      Re: There is no static internal API/ABI

      I have but one upvote to give

  10. rgjnk
    Facepalm

    Hmm

    From my reading it looks a lot like someone evangelical about their new favourite thing getting frustrated that everyone else won't follow them to the one true faith, then handwaving their objections away as the 'non technical' views of heretics.

    It's not a personal thing, they've said they aren't interested in your toy or in taking on the extra work for you to play with it, and they're under no obligation to do so.

    A big red flag is dismissing concerns about effort & complexity, especially the part about rolling out a new kernel; if it's going to be that easy then why not roll out your proof of concept right now to dismiss the doubters? (Hint - it's not that trivial especially if you want full compatibility)

    This all feels quite familiar and it's rarely the person throwing their public leaving strop that's in the right. The rest of the world will move on perfectly happily.

    1. hittitezombie

      Re: Hmm

      Billions of lines of C code in the kernel, and one guy comes from Microsoft and tells everyone "You're doing it wrong, do it my way". Fuck that, mate.

  11. Anonymous Coward
    Anonymous Coward

    Clear case of the person throwing a tantrum pretending other people are the problem. Why does this seem endemic to rust developers? From this to bcachefs and bcachefs-tools they are not simply opinionated but arrogantly ignoring reasonable procedures and side effects of their behaviour.

    The language has real benefits, but this doesn't help anyone.

    1. ScissorHands

      Don't conflate bcachefs with bcachefstools and both with this issue

      For starters, bcachefs has nothing to do with Rust, and that kerfuffle happened because the main developer is over-enthusiastic, isn't used to the discipline and compromise of integrating into a larger organization, and tried to push a point-update into a Release Candidate window - found a bug that needs too much intervention to solve? Live with it, put it on "Known Bugs" and try again.

      bcachefstools is Rust-related, but it looks like another impedance mismatch: Debian maintainers want full lockdown of all dependencies, while dependency handling in cargo is too dynamic. There may be something I'm missing, because cargo packages can be pinned. Debian's attitude flows from the Linux model of distro maintainers having to do all the work to keep dependencies in check, which comes down from the general practice of dynamic linking and is becoming harder and harder, while Rust doesn't have to care because it only does static linking.
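      For what it's worth, pinning in cargo looks like this (the crate name is just an example, not taken from bcachefs-tools):

```toml
# Hypothetical Cargo.toml fragment: "=" requires this exact version,
# and committing Cargo.lock (then building with `cargo build --locked`)
# freezes the entire dependency tree.
[dependencies]
libc = "=0.2.150"
```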

      Linux is based on two contracts: "never break user ABI" and "we'll break kernel ABI whenever we want, but if you give us your driver's source code under the GPL, we'll maintain it for you to correct for those changes and you won't have to worry about it". The latter has been a moat around Linux, because it ensures that nobody can create an alternative and have access to Linux's breadth of drivers and kernel modules, forcing all alternatives to wither and die for lack of hardware support. Maybe losing that lockdown is what Ts'o is actually so worried about: that RedoxOS (or Fuchsia) could ever become an alternative to Linux, if Linux's driver-level bindings ever become stable enough for something else to be compatible with them...

      1. Richard 12 Silver badge

        Re: Don't conflate bcachefs with bcachefstools and both with this issue

        Rust doesn't have to care because it only does static linking

        Wait, really?

        This Rust in the kernel project was dead before it started then.

        No dynamic linking means there's no possibility of plugins.

        Drivers are dynamically loaded plugins. So Rust cannot do driver interfaces.

        It also means it cannot ever handle the parts of nontrivial applications that have the highest risk of memory safety related issues.

      2. YggOne

        Re: Don't conflate bcachefs with bcachefstools and both with this issue

        > while Rust doesn't have to care because it only does static linking.

        It depends on what you mean by "Rust only does static linking". Can the std be dynamically linked? Not really, but that is because Rust doesn't have a stable ABI, and the C ABI can't express some interesting Rust features like generics. Having a stable ABI carries a huge penalty, however: evolving the language becomes nigh impossible. See people quitting the C/C++ committees over the inability to change the language because of ABI problems. [1][2]

        To quote a comment from [1] - Performance, ABI Stability, Ability to change. Pick two. Wisely.

        Can you use Rust to make a dynamic library? Yes, absolutely, positively. Can you keep it stable? Yes, that's possible, but it's probably by defining a bunch of C APIs for other clients to consume.

        [1] https://cor3ntin.github.io/posts/abi/

        [2] https://www.youtube.com/watch?v=By7b19YIv8Q
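        The "C APIs for other clients" pattern looks roughly like this (a minimal sketch; the function name is invented):

```rust
// Exposing a plain C ABI from Rust. Built with
// `crate-type = ["cdylib"]` in Cargo.toml, this yields a shared
// library other code can link against or dlopen; generics and other
// Rust-only features stay behind the C boundary, where the ABI is
// stable by construction.
#[no_mangle]
pub extern "C" fn mylib_add(a: i32, b: i32) -> i32 {
    a + b
}
```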

  12. jsmith84

    nontechnical nonsense is really important

    My 2 cents... using an over simplistic view of the world.

    There is a technical tradition (as opposed to technical debt) that sees Linux linked to the C language, which was the best choice available at the time for reasons that don't matter (my argument is valid for any OS and any language).

    To put it in simplistic words, there is a 1:1 mapping between the way Linux/Unix and C see the world. One could say C is the only recognised/supported language in Linux.

    Adding another language (Rust, Go, Kotlin, etc) means you move from a 1:1 mapping to a 1:n mapping, where a single change anywhere means n other changes (or n-1 if one argues OS and C are the same thing, when you set C lib apart).

    Adding a single new language means a lot more work, especially if the new language (or its libraries) keeps changing its specification all the time.

    I believe this is what is stated by the person who said "Here's the thing, you're not going to force all of us to learn Rust". It is first about learning Rust, obviously, but it is also unrealistic in terms of maintenance cost.

    In a client/server architecture, it is the server that (should) decide the protocol used to communicate with it, not the clients.

    Here, we have the relationship "OS <-- Programming language[i] <-- libraries[i,j]", and an update to libraries[i,j] is (perceived to be) forcing updates on the OS and on all libraries[k != i,j].

    I did not watch the entire video, nor follow all the Rust vs/with Linux videos available; however, it seems to me that

    1. the Linux community has already lots of work to do without Rust.

    2. a chunk of the Rust community *seems to* want to link Rust and Linux together forever (arguments made above)

    3. a chunk of the Rust community *seems to* be determined to have some OS component(s) written in Rust as the ultimate validation of their language.

    4. a chunk of the Rust community does some cool stuff with the language itself, to convince and help existing *users*.

    If some of the Rust community think 2 and 3 are the best way, I would respectfully suggest spending more time on 4, plus a few additional points:

    5. I do not see much effort to onboard new users, or to make the language simpler - it's getting worse than C++ on a daily basis (same comment for some of the std libraries).

    6. Write RustOS! Problem solved! Why try fixing something that cannot be fixed, when you can write your own perfect OS? Many would prefer that. (There is a big market with less tech debt; look at what Android accomplished.)

  13. Lee D Silver badge

    Yep, that's what you want: another GNU Hurd, which will try to play catch-up for literal decades (34 years in the case of Hurd) and never actually achieve any momentum.

    Or the most sensible way would be to choose a self-contained section of the kernel, with a maintainer that you can get on with, and start integrating there.

    Things like the kernel packet filtering, or the schedulers, or a particular class of device driver (e.g. we had this when wifi cards had to start loading firmware, so they worked on interfaces to proprietary firmware, which was then adopted by all kinds of drivers), or similar.

    I can't see a rewrite getting off the ground in more than a token fashion (thoughts of ReactOS and Hurd spring to mind), and I can't see anyone dictating to such a wide and diverse group of strongly-seated and well-established people quite how they should work with them. Either choose a smaller subset to become "your own" and prove that off-tree, and thus show the advantages, or break off and do your own thing entirely (but good luck with that).

    It seems that the clash of personalities would work both ways, because I'm pretty sure someone would leap to their aid as an intermediary if it was entirely one-sided and there were significant advantages. The debacle over integrating with Grsecurity/Brad Spengler, for instance, raged on for years with all kinds of people on both sides trying to help and ultimately it was abandoned because there was no way to work together even with the assistance and volunteers handling the personalities.

  14. alkasetzer

    Redox

    I may be wrong, but isn't the purpose of Redox (https://www.redox-os.org/) to be the Rust OS built on top of Rust by Rust developers?

    I understand that the scope of Redox is much more than just the scope of Rust in Linux (i.e. the entire OS vs just the Kernel), even so the metrics for redox are:

    * 9+ years in the making

    * <50 contributors to the kernel only

    So, while creating a Rust first OS (and kernel) is possible, I'm not sure that at this point there is enough developer mass to handle practical use of said OS (i.e. device maintainers). For OSes with a more limited support of hardware we already have all the BSDs (which for the most part work fine), Haiku and all the other niche OSes.

    In any case this is FOSS and people are free to do what they want with their time.

    1. bazza Silver badge

      Re: Redox

      One of the things to learn from Redox is development time. That project went from nothing to a functioning kernel, userland and graphical desktop in next to no time - a couple of years, I think. That's a feat Linux never accomplished (it borrowed someone else's userland and graphical desktop). Lesson: you can do more work, more quickly, with Rust.

      Redox wasn't prod ready last time I looked, but at some point it may well be. And Filho is right; other kernels that adopt compiler-guaranteed memory safety will always have an advantage over one written in C (even if that advantage is simply a matter of confidence). There will be people asking "why are we still using Linux?". For quite a while the answer will be, "because we do" - simple inertia. But, eventually, if Linux itself doesn't evolve / modernise / attract new volunteers, the answer will more likely become, "er, we shouldn't".

      Operating systems are especially vulnerable to changes in the workforce skill base. It's been a long time since C/C++ were *the* languages to learn at university, to write applications in, or even to write new OSes in. Linux is already short of willing volunteers, and the pool of C developers willing and able to work on it is inevitably in decline. I know from my own environment that fewer and fewer new recruits are turning up with any knowledge of C at all! If Microsoft and Apple continue adopting Rust for their own OSes, there's probably plenty of universities that'd take the hint and drop their C courses (what few are left) and start teaching Rust. And, as we saw with Java and now Python, universities like teaching languages that eliminate whole classes of complexity; they are easy to teach.

      I did find a report examining the demographics of the contributors to the kernel project. It's a bit old now - 2013 - (see this Bitergia page), but their conclusions then were (quoting)

      • Generations are smaller and smaller from about 100-150 to 30-50 per quarter
      • Older generations are disappearing
      • Last generations quite smaller now than they were six years ago

      I can't imagine the situation is getting any better. If folk like Ts'o want their legacy to be taken up by new blood, they may have to become the old dogs that have learned new tricks so that the new kids will play with them still.

      1. Richard 12 Silver badge

        Re: Redox

        I'm not sure you can really make that comparison.

        Going from nothing to "enough of a working prototype to demo in a friendly controlled environment" is a far smaller task than handling sufficient edge cases to actually be production ready.

        I've seen a great many projects that demo brilliantly, then vanish because getting production ready is a lot of work - and it's a lot of detail-oriented, often thankless work.

        1. bazza Silver badge

          Re: Redox

          Well, I can make that comparison. I watched Linux appear in the very early days, gather contributors, grow and mature. Redox has advanced at a much quicker rate. Admittedly it's not entirely an apples for apples comparison, but the Redox team has done a huge amount of coding that works in a pretty small amount of time.

          That reflects what other studies have reported: that programming in Rust is a lot more productive than programming in C.

          Given the shortage of volunteers to work in the Linux kernel project, developer productivity really should be a major factor in the project's technical choices going forward. Otherwise they're simply handing over effective ownership of the Linux kernel to a company that can afford to pay for developer resources and which isn't inclined to downstream their developments of it to anybody anymore. If RedHat chose to play GPL hardball with other projects like they have with RHEL, there won't be many people left running the Linux kernel project's version of the kernel.

          1. Anonymous Coward
            Anonymous Coward

            Re: Redox

            Well, writing programs on linux is a lot faster than doing the same on windows, so... apparently this doesn't matter too much in the real world. Leaving that aside:

            Niklaus Wirth did much the same thing with a language of his design: a handful of programmers, about two years, and presto - an OS, a GUI, and a decent selection of applications too. Personally I'd go with a Wirth vehicle before trying Rust, because it's much simpler, including needing a much less complex and convoluted compiler and toolchain, and requiring far fewer resources to function. The key is reducing complexity, and Rust, well, it makes a great show of papering it over but doesn't actually reduce it. And it adds quite a lot of its own to make its borrow checker work. As such, the Rust designers probably meant well, but the execution, well, er, uhm, ah...

      2. Anonymous Coward
        Anonymous Coward

        Re: Redox

        Now I know you're talking bollocks when you talk about Java and Python like that!

        1. bazza Silver badge

          Re: Redox

          They teach 6 year olds Python these days.

          An adjacent topic is the antiquity of today's computer architectures, complicated by the need to be good at running C, and how C's memory model (all threads in a process can access all of the process's memory) lies behind the cache flaws in various CPUs that led to problems like Meltdown and Spectre. Some CS types are advocating ditching Symmetric Multi-Processing (which is largely to blame for all these difficulties with caches) and going to something more like a Transputer architecture. Programming for such architectures is different, and that is often cited as the reason not to switch - no one would get it.

          Except one CS academic decided to teach programming for such architectures to 6, 7 year old children, using Python. They picked it up really quickly, and soon had written fairly complex multiprocessing applications. The point made was that if 6, 7 year olds can learn it, so can anyone else.
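          The message-passing style that teaching story implies - workers owning their data outright, with only results crossing explicit channels - is a few lines in Rust too (a minimal sketch, not the course material described):

```rust
use std::sync::mpsc;
use std::thread;

// CSP/Transputer-flavoured sketch: no shared memory between workers;
// each one owns its chunk, and only the result crosses the channel.
fn sum_in_parallel(chunks: Vec<Vec<u64>>) -> u64 {
    let (tx, rx) = mpsc::channel();
    for chunk in chunks {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(chunk.iter().sum::<u64>()).unwrap();
        });
    }
    drop(tx); // drop the original sender so the receiver's iterator ends
    rx.iter().sum() // arrival order doesn't matter for a sum
}
```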

          The professional world generally falls into two camps; those willing to learn and benefit from something new, and those who won't. The former are less likely to find that the world has moved on and left them behind.

          1. Roland6 Silver badge

            Re: Redox

            >"Except one CS academic decided to teach programming for such architectures to 6, 7 year old children, using Python."

            I remember my uni. lecturer, who taught declarative and non-procedural programming, warning us that we would find things difficult because our prior exposure to procedural languages would have conditioned our thinking, whereas those whose first exposure to programming was a language such as Prolog would find things easier.

          2. tekHedd

            Python for 6 year olds...

            We mostly refer to Python in my circles as "BASIC for millennials". Not necessarily /derogatory/ but... you know.

    2. Anonymous Coward
      Anonymous Coward

      Re: Redox

      Are you sure you mean "less than 50"?

      1. alkasetzer

        Re: Redox

        Yes. This was obtained by checking the number of contributors to the redox kernel repo. Other repos have different numbers of contributors, the main (i.e. redox) has around 100 contributors (excluding commits for the same person from different emails).

        The choice of just the kernel repo was to get a more or less apples to apples comparison with the linux kernel.

  15. Howard Sway Silver badge

    Why don't the old farts believe in the miraculous powers of the new shiny?

    Because the old farts have spent a few decades encountering and solving all the problems that your new shiny has yet to face.

    The main benefit of using C in the kernel is that it lets you get really close to the machine - which is a good thing when you have to deal with so much weirdly designed hardware and get the right bits moving to and from it. The code to do this can be pretty ugly, but you can at least get things working this way. The proposition that Rust is better for such low-level work has yet to be proven to me in enough concrete side-by-side examples.

    1. Lee D Silver badge

      Re: Why don't the old farts believe in the miraculous powers of the new shiny?

      Anything that interacts at a hardware level has to make assumptions about the content of arbitrary memory.

      In Rust, this requires "unsafe" wrappers around the code.

      Hence, anything PCI, most of the USB code, boot-time code, etc., and most device drivers (which are the vast bulk of a kernel like Linux) are no better off in Rust than in C, and the rewrite would be painful: little real gain at enormous cost and risk of mistakes.

      That this doesn't get recognised more often is embarrassing. There are HUGE projects out there where you never have to make assumptions about the contents of memory. But in an OS - bootloaders and device drivers in particular - it's totally unavoidable, and that's where dangerous mistakes happen no matter what language it's written in. But rather than rewrite some large project that doesn't require (much) unsafe code (e.g. Apache, MariaDB, etc.), Rust is trying to shoehorn itself into the kernel - possibly the worst place for it.

      1. ScissorHands

        Re: Why don't the old farts believe in the miraculous powers of the new shiny?

        Again with the "unsafe" fallacy?

        Having "unsafe" blocks pins down where serious attention must be paid - everything OUTSIDE the unsafe block is safe(r) than before. And that's before considering that safe code can (and should) armor the unsafe block against bad calls. So no: just because there is the possibility of breaking Rust's safety protections, and although doing so is basically mandatory for interaction with hardware, that doesn't render it useless.
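        A minimal sketch of that armoring idea (names hypothetical): the safe wrapper validates the contract the unsafe code relies on, so the audit surface shrinks to a couple of commented lines:

```rust
// Safe wrapper "armoring" an unsafe block: callers can never violate
// the unsafe code's precondition, because the wrapper checks it first.
fn read_slot(table: &[u32], offset: usize) -> Option<u32> {
    // armor: reject anything that would break the contract below
    if offset >= table.len() {
        return None;
    }
    // SAFETY: offset < table.len(), so the pointer stays inside the
    // slice's allocation and the read is valid.
    Some(unsafe { *table.as_ptr().add(offset) })
}
```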

        1. Ace2 Silver badge

          Re: Why don't the old farts believe in the miraculous powers of the new shiny?

          What parts of my net or rdma drivers *wouldn’t* be in unsafe blocks?

        2. Claptrap314 Silver badge

          Re: Why don't the old farts believe in the miraculous powers of the new shiny?

          I'm sorry, but this argument is childish. Pass a prelim or two in mathematics, and then get back to me.

          Code either does what it is supposed to 100% of the time, or it has holes. The difference between "works" and "does not work" is, in many places, a single bit. When you change a single bit in one place, there is no way to ensure via the compiler that a problem won't descend from that bit at some distant place.

        3. Richard 12 Silver badge

          Re: Why don't the old farts believe in the miraculous powers of the new shiny?

          The point is that in the kernel, the vast majority of it is unsafe.

          Thus what would actually happen is that some large proportion of it gets rewritten (introducing errors) and wrapped in "unsafe" (gaining nothing).

          Thus pushback, as there is great risk for very little proposed benefit.

          If you want to prove the language works well for low-level, write firmware for hardware devices.

          And if you want the language to be generally adopted, write applications and bindings to popular GUI frameworks.

          And if you ever want it to replace C, standardise an ABI.

          1. ScissorHands

            Re: Why don't the old farts believe in the miraculous powers of the new shiny?

            "in the kernel, the vast majority of it is unsafe."

            Yes, because it's C. It's ALL unsafe - even the logic that handles the internal data structures, which doesn't need to be unsafe. And the internal data structures are what make the kernel.

            "(gaining nothing). Thus pushback, as there is great risk for very little proposed benefit."

            Take your blinders off. Logic errors are easier to catch than CVEs. Errors in defined behaviour are catchable with tests; errors in undefined behaviour only _might_ be.

            "write firmware for hardware devices"

            Oxide Computer Company - AMD Zen hardware initialization without a BIOS, IPMI control without actual IPMI, an embedded kernel, I could go on: https://github.com/oxidecomputer

            "write applications and bindings to popular GUI frameworks"

            Right, that's what's preventing people from using C# - not having proper bindings to Qt. Why not walk and chew gum at the same time? Unlike other memory-safe languages like Java or C#, Rust can be used as a systems language - which is much harder, but where the gains would be bigger, and it's not something the other memory-safe languages can do.

            "standardise an ABI"

            why aren't you in charge of the Linux kernel? what possible reason could the Linux kernel developers have for not standardising their (driver) ABI?

        4. Lee D Silver badge

          Re: Why don't the old farts believe in the miraculous powers of the new shiny?

          "unsafe" use can literally compromise the neighbouring "safe" portions because the safe guarantees cannot guard against code that's not executed in a safe context.

          So your isolated "unsafe" portion can actually impact the safety of your "perfect little isolated circle" of safe Rust code.

          1. SCP

            Re: Why don't the old farts believe in the miraculous powers of the new shiny?

            Yes, this is why ScissorHands wrote that unsafe identifies the areas of code "where serious attention must be paid".

            It is telling you explicitly that the language/compiler is not providing any protection here, so you need to be extra careful in your designs, reviews and testing. You might also want to consider explicit runtime checks at the exit of the 'unsafe' block.

            1. Lee D Silver badge

              Re: Why don't the old farts believe in the miraculous powers of the new shiny?

              Which part of "it destroys the guarantees of the surrounding 'safe' code" do you not understand?

              You have to pay attention to random pieces of code, which may be far out of scope of the unsafe region, and while someone is bashing on what they think is "safe" code, you're pulling the rug from under them without them knowing.

              It's literally no better than C when dealing with anything in, near or even ever referenced by unsafe code. Of which there would be a LOT for hardware access. And "near" has a very loose definition because with Rust you have absolutely no idea where things are actually being stored in memory anyway.

              It identifies the code that's going to mess EVERYTHING else up, silently, and cause so many problems that you'll have to audit the entire thing anyway.

              1. SCP

                Re: Why don't the old farts believe in the miraculous powers of the new shiny?

                Well I have my understanding of it, which seemingly differs from your understanding of it.

                In my experience (working on safety critical code) the use of unsafe means that the compiler does not apply certain language rules in this region. You should only use it where you need to do some 'clever' fettling to achieve some aim that would otherwise be impractical to achieve. Once you have finished your fettling you usually want to get back to the compiler providing the protections the language offers.

                For high reliability you also need to verify that your fettling has not screwed things up for everything else. This might be done by including some run-time checks that explicitly verify what you have done, it might include formal or rigorous proof/argument about the correctness of your fettling supported by extended testing, it might include some additional review activities. This additional work provides the guarantees. The benefit is that this extra work is confined to the 'unsafe' region - limiting the amount of work involved.

                If you choose not to do this work then yes there is no worthwhile guarantee. Your code might still be better off for having used the language because the opportunities for you to screw things up have been greatly reduced - but you don't have the same strength of guarantee as you would have had if you had been able to avoid using 'unsafe'.

                And yes, if you are using 'unsafe' to do some large scale hacking that you cannot adequately verify then you are buggered. But most people interested in high reliability systems don't do that and value the fact that they can reduce the risks of making simple errors which require additional processes to detect and eliminate.

                Of course, things like memory safety are only one aspect of code being fit for purpose - there are many ways you can screw things up and you should have a verification and validation process commensurate with the requirements of the system.

  16. herman Silver badge

    Stick a fork in it

    Linux is Free. The Rust buckets can fork it.

  17. DrkShadow

    Others chose to stay away

    > "Those who can't or don't want to be involved are obviously welcome to stay away. This does not (and did not) bother me at all."

    Ok. Lets reconcile this with,

    > The video depicts resistance to Filho's request to get information to statically encode file system interface semantics in Rust bindings

    So -- the person here is trying to get information that isn't necessarily static so that they can update the kernel code, then merge that code into mainline.

    Then,

    > "[T]o reiterate, no one is trying [to] force anyone else to learn Rust nor prevent refactorings of C code."

    So, what does this person think a kernel maintainer _does_? This person wants to create kernel code, and is complaining presumably because it isn't being mainlined. So suppose he wins and the Rust code is mainlined. A thing changes. Whose job is it to update the Rust code when it breaks? Oh! The maintainer's! - who now has to either dump the Rust code or _fix it_. I.e., the maintainers will be expected to learn Rust. Or, if not them, then the person making the change in C will also have to make the change in Rust - and if not that person, either the patch is rejected (refactoring is prevented), or the maintainer has to do it. I mean, the maintainer guarantees that piece of the kernel code - so really it's the maintainer.

    The maintainer does what the person here said is just fine, and "stayed away". That leads to the subject of this article throwing their hands up and quitting. Good. I hate people like this.

    For a long time, there was an external MM branch of the kernel. There can be an external Rust branch of the kernel. When Rust is shown to be so-much-better, or have more development than the mainline branch, and all the necessary, on-going support, it'll probably be incorporated. Until then, it seems people are "obviously welcome to stay away". Probably, this won't happen unless the whole kernel is rewritten in rust.

    > "This does not (and did not) bother me at all."

    Clearly this is false. The person gave up under the work and effort they were trying to shunt onto others. But hey, make yourself out to be altruistic, and simply explain how everyone else is in the wrong.

    ---

    They complain that people who work on the kernel yell and shout, but honestly, passive-aggression like this is toxic. (This is a small amount, but it's only one sample; who knows about those "bike-shed" scenarios.)

    1. OlegLalexandrov

      Re: Others chose to stay away

      I am sympathetic to the old-school developers. If Rust becomes part of the core Linux logic, rather than just some drivers, C developers won't be able to do their job properly, and I don't think Rust programmers can pick up the slack.

    2. Abominator

      Re: Others chose to stay away

      Well said.

    3. Fazal Majid

      Re: Others chose to stay away

      The Rust community is rife with such toxicity:

      https://fasterthanli.me/articles/the-rustconf-keynote-fiasco-explained

  18. fg_swe Silver badge

    Looking Into The Future Through The Rear View Mirror

    https://en.wikipedia.org/wiki/Burroughs_Large_Systems

    https://stackoverflow.com/questions/1463321/was-algol-ever-used-for-mainstream-programming

    https://en.wikipedia.org/wiki/ICL_VME

    https://en.wikipedia.org/wiki/Oberon_(operating_system)

    https://en.wikipedia.org/wiki/Singularity_%28operating_system%29

    1. DrkShadow

      Re: Looking Into The Future Through The Rear View Mirror

      That's a lot of links you've got there.

      Give a summary, please. Especially the relevant points.

      1. fg_swe Silver badge

        Summary

        It is possible to build OS kernels in memory safe languages.

        1. Richard 12 Silver badge

          Re: Summary

          It's possible to build a house out of Lego.

          That doesn't necessarily make it a good idea.

  19. Anonymous Coward
    Anonymous Coward

    Vroomfondel and Majikthise are unfortunately not dead!

    "We demand rigidly defined areas of doubt and uncertainty!"

  20. Blackjack Silver badge

    [I think if the amount of effort being put into Rust-for-Linux were applied to a new Linux-compatible OS we could have something production-ready for some use cases within a few years]

    The problem would be that it would be compatible with a Linux from the era the project started.

    People really tend to underestimate how much stuff the kernel just makes run.

    There is no LTS for the Linux kernel, so you have people working five years to make a Rust Linux-compatible kernel, and said kernel is five years behind. Sure, some people would find it useful, but to me it just seems like a way to sideline Rust in the kernel.

  21. mevets

    Yesterday, all my troubles seemed....

    The asm + C wasn't the only version of early UNIX.

    There was a version, TUNIS, successfully implemented in Euclid in 1982 [ ref: https://dl.acm.org/doi/10.1145/1041466.1041467 ].

    Euclid, for a developer, was like Rust with really sharp edges.

    People joked that every program written in Euclid worked, because changing the definition of *worked* was easier than getting a Euclid program to compile.

    That is probably a bit harsh, but Euclid had very strong guarantees, and a lack of expressiveness that makes it look like Rust-from-another-mother.

    Oddly, Rust prefers a horrifyingly difficult shorthand of punctuation in an era where baud rates don't matter; whereas Euclid preferred elucidation in a completely different era.

    The revenge-fantasy of a Don Trump is not to be aspired to. It is an embarrassment for the whole human race.

  22. yetanotheraoc Silver badge

    To paraphrase the old joke

    Solving the technical part of the problem requires 90% of the effort. Solving the non-technical part requires the other 90%.

  23. Tron Silver badge

    Modest proposal.

    Time has passed. We need to look back at what worked, what didn't, and what we now need, and start again from scratch. The tech industry pays for it and ensures it has a professional structure: wages and workplace codes. Nobody owns it. Everyone benefits from it.

  24. This post has been deleted by its author

  25. mili

    Next stage of the war

    It is obvious that Rust has the higher ground in terms of memory safety, and Ted puts it bluntly: he is still not ready to surrender and learn Rust. In essence this means that there will be blood until either somebody changes the coordinates or one side wins over. But just from my perspective: Rust, as it appears today, is too difficult to master for a big number of people. Given the complexity of the kernel and the language, this will become a playing field for very few and will lose out to something more approachable to the masses.

    1. Abominator

      Re: Next stage of the war

      It's a shit show of a language. They could have made something much more familiar, but no.

      Anyway, Rust is so last year, like Go before it.

      Zig is the new kid in town. I much prefer it to Rust, but C still has a place in my heart.

      https://ziglang.org

  26. hittitezombie

    Fuckity bye, you're a part of Microsoft's "Embrace, Extend, Extinguish" mission.

    1. Abominator
  27. Mrs Spartacus
    Happy

    Interesting sociology experiment

    As a project manager of many years, I've had my fair share of team management issues, but always in a corporate structure.

    I have no experience of volunteer projects such as this, and the mind boggles at how anyone gets anything done at all with so many different brains all beavering away in (often) different directions. I take my hat off to those who make it actually happen; I'm not sure I'd have the patience.

    Such a community-based project environment must have been studied by a sociologist looking for a good MSc or doctoral subject. I wonder if their sanity survived the challenge...

    Is it obvious I'm already working towards semi retirement?

  28. Jeffrey Tillinghast

    Why are the Rust people pushing so hard to have Rust in the Linux kernel? Why not work on an O/S kernel in Rust from scratch? The fact that they do not seem to be enthusiastic about it is strange. This aside, what were they expecting from old-time Linux kernel developers as a result of their attempt to push Rust into the Linux kernel? Hugs and kisses? Perhaps they should learn some basic human psychology.

    1. In total, your posts have been upvoted 1337 times

      The Linux source is quite idiomatic, thanks to the zealous peer-review process. The documentation is mostly pretty good. Feels like the whole thing would be ripe for at least partially automated translation to another language, be it Rust or something else.

  29. ShortStuff
    Linux

    Build Your Own -- RUSTIX

    Come on all you RUST developers, develop your own version of Linux. Make it 100% RUST and prove your point. Let's call it RUSTIX

    1. Abominator

      Re: Build Your Own -- RUSTIX

      There is one, called Redox. Nobody uses it.

  30. Abominator

    Maybe Rust developers just go write their own OS. See if people use it or not.

    There is RedoxOS. How's that going?

    From what I can tell there is next to no interest. So they're doing a bit of a cuckoo instead.

    The amazing lack of self-awareness: hey, we're nice guys, just here to replace everything you're doing, if only we were allowed!

  31. alfmel

    I agree with many of the things that have been said here, especially about the added burden of maintaining data structures in multiple languages. I also see two human behaviors (as opposed to technical hurdles) that muddy the discussion.

    The first is language preclusion: when a developer refuses to work in languages other than his/her preferred language. I can understand a developer becoming incredibly proficient in one language. Unfortunately, superb understanding of a language is often accompanied by elitism and snobbery. Ted Ts'o's comment about "forc[ing] all of us to learn Rust" strikes me as a bit elitist, taking credibility away from the technical argument of increased maintenance burdens.

    The second is attachment to our work. When you pour your heart and soul into producing something, it is natural to feel proud and protective of that work. Early in my career I would become so protective I impeded progress and damaged my reputation with other developers and management. I eventually learned to let go and lead, rather than mandate how things should be. I have unfortunately seen many cases both in my work and in my Open Source contributions where maintainers become overprotective and impede progress. I'm not suggesting Rust is the right direction the kernel should take. I am merely restating that being over-protective is one of the reasons disruptive innovation often comes from the outside.

  32. Mythical Ham-Lunch

    I can admire the goals of Rust but I find it very telling that Microsoft pushes it so hard. The selling points of the language seem to be, at a gross level, that for a given final system state it takes comparatively fewer tokens than C to represent the instructions to achieve that state, and that the parts of the system state that are relevant to the user are described by tokens which, statistically, are likely to be close to one another in the source code.

    In other words, because of the strong statistical relationship between code blocks and user-observed behaviour, it's amenable to generation by LLMs, because the LLM input (the program specification) will have a very tight relationship with the LLM output/compiler input (the source code). Any given line of specification is more likely to correlate to a series of characters in the LLM data model which has been associated by someone else with the same specification already.

    A particularly paranoid person might even wonder if the push to get Rust into the kernel is so that an unpaid developer community will generate a whole bunch of Rusty systems code and associated documentation which can then be ingested by Copilot and barfed back out for Microsoft devs. A paranoid person. Might think that.

    1. StrangerHereMyself Silver badge

      The advantage for Microsoft is that Rust requires essentially no maintenance. If it compiles and works as expected then it's guaranteed to be free of memory snafus and vulnerabilities (save for logical ones, of course).

      The C code base requires continuous scrutiny, especially when changes are made. This costs a lot of money and more importantly developer capacity. Capacity that might be better spent on improving the product.

      1. Roland6 Silver badge

        >Capacity that might be better spent on improving the product.

        Evidence to date is that capacity is used to implement UI flappery and what isn't required for that is laid off in cost-cutting to improve profitability...

        Probably the only time MS took improving the product seriously was when the shit hit the fan over the original release of XP and they spent time on testing before releasing SP2, and then SP3, which are the variants of XP everyone remembers.

      2. Mythical Ham-Lunch

        > ... requires continuous scrutiny ...

        ... from Microsoft?! Larf and a harf, surely.

        1. StrangerHereMyself Silver badge

          Well if you don't you get the world we live in today: with a continuous stream of security vulnerabilities and updates. And when the updates can't be bothered with we have long-lived vulnerable infrastructure which may be taken over by an adversary at any moment.

  33. StrangerHereMyself Silver badge

    Microkernel

    The Linux kernel itself needs to be replaced with a microkernel written in pure assembly, with its device drivers and userspace utilities written in Rust. It should be a drop-in replacement so most Linux distros can continue without even noticing.

    Linux is pure C and I doubt this will ever change in a meaningful way as this story illustrates.

    1. O'Reg Inalsin

      Re: Microkernel

      Doesn't the compiler produce assembly output, for both Rust and C?

      1. StrangerHereMyself Silver badge

        Re: Microkernel

        Yes, but hand written assembly is usually much faster than compiler generated crud. Since a microkernel can be quite small the effort may be worth it in terms of performance.

        1. O'Reg Inalsin

          Re: Microkernel

          Nowadays I would guess that improved compiler optimization, or post-compile automated optimization (incorporating real-time performance measurement), would be the way to go.

        2. anonymous boring coward Silver badge

          Re: Microkernel

          Sounds really maintainable and portable. [Sarc]

          C is efficient enough. It was invented just for the purpose of portability and maintainability, while keeping efficiency.

          1. StrangerHereMyself Silver badge

            Re: Microkernel

            It simply isn't good enough. A compiler doesn't know about a programmer's intentions and has a very limited horizon for optimizing code. Context switching is performed thousands of times per second, so even small improvements here make a huge difference.

            Maintainability isn't much of an issue because the kernel code involved is very small and almost never changes. I'm pretty sure the Windows scheduling code hasn't been touched in decades.

            1. Richard 12 Silver badge

              Re: Microkernel

              I assure you that the scheduling code has been changed quite a lot in Windows, Linux and macOS over the last few years.

              There have been some pretty significant published changes recently in both Windows and Linux to better handle higher core counts and support heterogeneous cores - 'Performance' and 'Efficiency'. Going back a little further, socket affinity was a rather major change.

              macOS changed entire hardware architecture, so they almost certainly tweaked it too.

              There is more going on in the depths than almost anyone realises.

    2. Oninoshiko

      Re: Microkernel

      Actually, with a sufficiently well designed IPC system, you don't even need hand-coded assembly. Just take a look at what some of the L4 kernels accomplish.

      Now, you have to be willing to unshackle yourself from the overheads of the Linux API / ABI. Of course, you can always run the Linux kernel as a usermode process, but trying to build a microkernel that is fully Linux compatible is a fool's errand. I mean, almost all of what Linux provides would have to be provided by user-mode daemons anyway.

      1. StrangerHereMyself Silver badge

        Re: Microkernel

        Hand-coded assembly will always be faster, this has nothing to do with the IPC system.

        In fact, I'm disappointed Microsoft hasn't rewritten Windows core parts in assembly. The dream of multi-architecture has been dead for decades so writing assembly for core parts is worthwhile.

        Look at what KolibriOS is doing with assembly. A full-fledged operating system with a UI in less space than a single Windows program's Help file! It's so small the entire operating system could fit inside the processor cache!!

  34. O'Reg Inalsin

    Why Rust might be a good idea, but the implementation still problematic.

    Code review to check for memory safety is a good thing. But it is really hard to actually do the work of proving that a piece of C code is memory safe. And it is also hard to leave proof that a code review was thorough and successful. Rust makes that job much possible due to its constraints, which require memory-safe code unless the "unsafe" keyword is used.

    Obviously if the C code is already known to be memory safe, then nothing is gained by rewriting it in Rust - in fact there is a risk of introducing other bugs, although those other bugs are not as likely to be security-critical as memory-unsafety bugs.

    The point is that almost certainly some C code in the kernel is memory unsafe, but it is unknown where. Currently the best way to proceed is to rewrite each module in Rust, beginning with the leaf modules. Of course Rust is "harder" than C at first, but that's only because it incorporates the proof of memory safety into the coding itself, which in the long run makes life easier by getting the hard stuff done up front instead of waiting until a memory security breach is detected. It also makes it much harder for an inside bad actor to deliberately introduce memory vulnerabilities - which is currently no more difficult than introducing memory bugs by accident.
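    A minimal sketch of that division of labour (the vector and values here are purely illustrative, nothing from the kernel tree): safe Rust code is proven memory-safe by the compiler, while anything inside an `unsafe` block is exactly the part a reviewer still has to audit by hand.

    ```rust
    fn main() {
        let v = vec![10, 20, 30];

        // Safe Rust: indexing is bounds-checked by the language; an
        // out-of-range index panics rather than reading arbitrary memory.
        assert_eq!(v[1], 20);

        // `unsafe`: get_unchecked skips the bounds check, so the
        // programmer, not the compiler, guarantees the index is valid.
        // This block is the part a code review must still scrutinise.
        let second = unsafe { *v.get_unchecked(1) };
        assert_eq!(second, 20);

        println!("both reads returned {}", second);
    }
    ```

    The practical upshot for review is that the `unsafe` blocks are greppable: the proof obligation is confined to marked regions instead of being spread across the whole code base, as it is in C.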

    On the other hand, if introducing Rust takes up too much time from other maintenance work, then obviously kernel quality could suffer. In this case I am not sure why Filho was requesting Ted Ts'o to document the file system interface specs, because obviously that takes time. Someone from Filho's team could have done it, and by doing it would be ready to start implementing it in Rust. If Filho was doing it as a way to get Ted Ts'o to promise that the interface would never change, then Ted Ts'o was correct in pointing that out.

    So there might be some turf politics in play here too. I'm guessing Linus didn't override Ted Ts'o because he realizes the importance of continuing kernel support, and Filho was upset by that.

    1. anonymous boring coward Silver badge

      Re: Why Rust might be a good idea, but the implementation still problematic.

      "Rust makes that job much possible"

      It's either possible, or not.

  35. rodrigopedra

    Surname usage

    Off topic: "Filho" is a common suffix in Portuguese when a son is named after his father. "Filho" literally means son.

    The appropriate way to refer to him is “Almeida Filho”.

    It is the same as using Junior in English. It would be awkward if he were named "Wedson Almeida Junior" and referred to as just "Junior" instead of Almeida Jr.

    If you want to abbreviate "Filho", it could be "Almeida Fo.", although this abbreviation is very uncommon and more likely to appear in an academic paper's reference section.

  36. anonymous boring coward Silver badge

    I don't see evangelical language promotion as very useful for Linux.

  37. squizzler

    New languages deserve new OS

    The failure of Rust in Linux is, in my opinion, excellent news. It frees up Rust kernel developers for legacy-free projects such as HarveyOS' r9 (Plan 9 in Rust) or Redox. New generations of languages should be seen as an opportunity to explore new paradigms in OS design, not warm over old projects.

    It is time to leave the Unix kingdom in the past and step boldly into the future. Rust in Linux would only have put new wine into an old bottle.

  38. Herby
    Joke

    Obvious solution??

    ChatGPT:

    Please rewrite the Linux kernel in Rust.

    I'll wait.

  39. 10111101101

    Bindings between the languages are the worst-case scenario.
