Public developer spats put bcachefs at risk in Linux

Bcachefs project lead Kent Overstreet has written about his problems with the Linux kernel folks and the code of conduct put in place to prevent such flamewars. The bcachefs team is partly funding the development of their next-gen file system using Patreon, where Overstreet posted a lengthy article about …

  1. Anonymous Coward

    Are we reaching a monolithic limit?

    There are good reasons for Linux having a monolithic kernel and good reasons why it doesn't have a monolithic development team but that's always going to result in some tension.

    The bigger the codebase gets the greater the chance of some apparently innocent change causing some unforeseen consequence. But the older the codebase gets, the more people will want to add new stuff while fewer people will be around who know how the old stuff works. Typically, the way you'd deal with that is to modularise your system in some fashion so that developers only had to be aware of the boundaries of the modules they depended on and the internal implementation of one module wouldn't impinge on another - but that would be a rather different beast.

    I think it's amazing that Linux has managed to expand its scope so much while retaining most of its architectural principals, but that's largely down to there being one man at the centre who understands how pretty much all of it works. I'm not sure that's scalable - or even healthy - in the long run, but I have no alternative to suggest - it's still mostly not broke. And in any case, everyone will be far more interested in railing against the iniquities of codes of conduct...

    1. DS999 Silver badge

      Re: Are we reaching a monolithic limit?

      Most things being added are modules, so it isn't like the kernel is growing anywhere near as fast as the increases in the total lines of code being reported for it would imply.

      1. Anonymous Coward

        Re: Are we reaching a monolithic limit?

        Kernel modules are really just a form of dynamic linking. They reduce memory requirements by eliminating code that will not be used in a given configuration, but anyone writing a module still needs to be aware not only of the core kernel but of the impact of any proposed changes on modules that might possibly be loaded. It's the amount of knowledge developers need to have in their heads that I think may be reaching a limit rather than the physical size of the code. With notable exceptions, disputes of this kind are not usually wilful, but simply the result of different understandings of the possible consequences of a proposed change - and there's ever more to understand.
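
        (For anyone who hasn't poked at one: the skeleton of a loadable module is tiny. This is only a rough, illustrative sketch - the module name is made up, and it would be built out of tree with the usual obj-m Makefile against the running kernel's headers:)

        /* hello_mod.c - minimal loadable module skeleton (illustrative only). */
        #include <linux/init.h>
        #include <linux/kernel.h>
        #include <linux/module.h>

        MODULE_LICENSE("GPL");
        MODULE_DESCRIPTION("Trivial example module");

        /* Called at insmod time, after the module loader has resolved the
         * module's references against the kernel's exported symbol table -
         * which is exactly the "dynamic linking" being talked about above. */
        static int __init hello_init(void)
        {
                pr_info("hello_mod: loaded\n");
                return 0;   /* 0 = success; module stays resident */
        }

        /* Called at rmmod time. */
        static void __exit hello_exit(void)
        {
                pr_info("hello_mod: unloaded\n");
        }

        module_init(hello_init);
        module_exit(hello_exit);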

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: Are we reaching a monolithic limit?

          > Kernel modules are really just a form of dynamic linking.

          Exactly so.

          (And there is a strong argument to be made that dynamic linking was a mistake. That's what Ritchie and Thompson went on to conclude and they removed it from their subsequent OSes.)

          Kernel modules are not true modularity. That requires a microkernel, and microkernels are _hard_ and more to the point they're generally also _slow_. The IPC kills them.

          Plan 9 did a microkernel type architecture without the painful IPC by putting all the interprocess communications in the filesystem. This is the one true Unix way. Nobody talks about how that performs and I suspect there is a good reason for that.

          In principle, all the important elements of the myriad bits of guff that Linux has accumulated like some demented Katamari Dama-*C* (see what I did there? 'Cos it's all written in C? Geddit?) could be turned into Plan 9 servers and still accomplish the same task.

          There is one other project out there that's doing something interesting with filesystems: DragonflyBSD. Its HAMMER2 filesystem aims to deliver all the good stuff of ZFS, bcachefs etc. -- COW snapshots, self-healing without days of `fsck` etc. But they are trying to do something no other Unix project has ever done: the aim is that you can stick a HAMMER2 volume on a shared connection and multiple independent OS instances can all mount it at once.

          Normal networking means one kernel only mounts a device and it shares it. All the other machines ask the master to load or store stuff for them.

          HAMMER2 offers the promise that you don't need to do that: all the machines can mount the drive at once, with no single master.

          That's revolutionary. And that, coupled with Plan 9 in a loosely-coupled cluster, could *in theory* produce a working scalable cluster like the world hasn't seen since VAX/VMS.

          1. Anonymous Coward

            Re: Are we reaching a monolithic limit?

            > The IPC kills them

            This is historically true. And multiple protection rings - or, worse still, capability-based addressing. But virtual memory is also slow - in principle - it's just so useful that it's been made to work. And I suspect the increasing problems of security in a networked world will mean many of the other old ideas that were "too slow" will slowly come back into fashion.

            And, typically, your average, virtualised, cloud computer isn't talking to a lot of different devices, it's connected to everything via a network interconnect and that radically simplifies the requirements. I worked on VMS (not on the clustering code), but it was a similar situation: the CI simply made a lot of problems go away. In some ways, this harks back to your article about the death of Thomas Kurtz - and the early machines that used a front-end processor to drastically reduce the complexities of the operating system that ran the primary CPU. Computer history contains many circular paths.

          2. containerizer

            Re: Are we reaching a monolithic limit?

            > But they are trying to do something no other Unix project has ever done: the aim is that you can stick a HAMMER2 volume on a shared connection and multiple independent OS instances can all mount it at once.

            It sounds like you're describing a cluster filesystem. This has been done many times in the UNIX world - Veritas, GFS[2], GlusterFS etc.

            1. T. F. M. Reader

              Re: Are we reaching a monolithic limit?

              I'd also throw in an honourable mention of Andrew FS, Lustre, etc. I think (too lazy to check) that these - and some others - predate the (very relevant and correct) examples mentioned by @containerizer. In general, distributed filesystems have a long history in the UNIX world.

            2. Liam Proven (Written by Reg staff) Silver badge

              Re: Are we reaching a monolithic limit?

              > It sounds like you're describing a cluster filesystem.

              I do not claim to be an expert in this field, but FWIW, I *did* write some of the documentation for SUSE Enterprise Storage, its now-discontinued Ceph product, so I am not a complete newbie.

              No, it is not the same thing. It's akin, but in the same sense that a RAID mirror is akin to a pair of mirrored servers. Similar, but not identical.

              Ceph is the only one I've personally worked with. It's akin to a RAID where the volumes are on different machines: a bunch of nodes working together present a virtual storage volume. A redundant array of inexpensive servers, as it were.

              Clients talk to the cluster via specific APIs and ask for objects to be stored or retrieved. If you want it to look like a block device you can just dump bytes on, you need a special RBD (RADOS block device) kernel driver that can fake this.

              HAMMER2 is something else entirely.

              This would be tricky with USB but doable with Firewire, so let's imagine you have 2 Firewire computers, A and B, and a single Firewire device, C. Connect both A and B to the same device C at once. (Most Firewire devices have 2 ports, because the bus allows daisy-chaining. USB needs hubs, because you are not allowed to put more than 1 device on 1 port at once.)

              There used to be Firewire scanners and cameras. A scanner is easy: A can scan from C, or B can scan from C, but both can't scan at the same time. A webcam is easier: both A and B can stream the same video at the same time. You can't stream data _to_ a camera; it's not a projector.

              With any conventional disk format, though, if both A and B tried to mount the drive on C at the same time, the result would be to trash the disk. A wouldn't know what B was writing and _vice versa_. Result, they interfere, trample on each other, and the disk is hopelessly corrupted as soon as both try to write.

              But format that disk with DragonflyBSD and *both computers can read AND WRITE freely* to the *same volume* at the *same time*.

              The entire point here is that this is not a cluster-level tool: it's much lower level. It's a disk format, an alternative to NTFS or ext4. No fancy hardware needed, no network, no daemons or network sharing.

          3. MichaelGordon

            Re: Are we reaching a monolithic limit?

            Dynamic linking has its problems, but it makes fixing bugs/security-holes far easier. Replace a single shared library and every program that uses that library, whether it's part of the distribution or locally-added, gets the fix automatically. Imagine trying to fix a problem with libc if everything was statically linked. Possibly not so much of an issue nowadays, but it also massively decreases the amount of disk space used by executables. On my machine hello.cc produces a 16528 byte dynamically-linked executable compared to a 2403960 byte statically-linked executable.
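
            (If anyone wants to reproduce that kind of comparison, something like this plain-C stand-in for hello.cc is all it takes - the build commands in the comment assume GCC and glibc, and the exact sizes will vary by toolchain and architecture:)

            /* hello.c - trivial program for comparing link modes.
             *
             * Build it both ways and compare sizes, e.g.:
             *   cc hello.c -o hello_dyn              (dynamically linked)
             *   cc -static hello.c -o hello_static   (statically linked)
             *   ls -l hello_dyn hello_static
             *
             * The dynamic binary merely records a dependency on libc.so,
             * resolved at run time; the static one carries its own copy of
             * the C runtime, hence the large size difference. */
            #include <stdio.h>

            int main(void)
            {
                puts("Hello, world");
                return 0;
            }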

            1. druck Silver badge

              Re: Are we reaching a monolithic limit?

              We are talking about dynamic linking within the kernel, not about general programs and libraries. In this context it's more about keeping modules independent of each other, than sharing code.

    2. Liam Proven (Written by Reg staff) Silver badge

      Re: Are we reaching a monolithic limit?

      Principles. I mean, you could say it has one architectural principal, and it's Linux. ;-)

      But you're right. I took brief looks before kernel 1.0 and only started running it after kernel 2.0, because at that time, it was essentially an alternate option in NT's niche: as a server OS, but one that was vastly cheaper than all the other options. So SMP mattered and that only started working usefully after kernel 2. It's nearly 30 years ago but I think kernel 2.2 is when it started scaling well to 2 CPUs and usefully to 4.

      I thought it was a very clever hack, bringing this gigantic, arcane, cryptic commercial/proprietary/academic OS (which I respected but didn't like much) into the FOSS world and making it _work_. Absolutely astounding, and a wonderful example of that massively overused word, _synergy_. All these different separate projects came together -- GCC, shells, editors, C libraries, some of them huge and Byzantine like X11 and its myriad window managers and terminal emulators and things -- and put them together on some punk student kid's home-made kernel and presto, you have a complete OS. A decade later, a _useful_ OS for non-specialists.

      I would never have believed that another couple of decades later it could possibly have grown so much that it ran on anything and everything, supported every filesystem and programming language and weird-ass bit of hardware ever invented, and it ran 3/4 of the world and had obliterated most proprietary OSes for generic COTS computers. Astounding.

      The norm for research and experimental OSes was that they ran on the creator's own machine and nothing else.

      Later, VMs came along, and then, they ran on that specific VM and nothing else.

      But already by the late 1990s if you wanted to know what weird old hard disk was in a random 386SX, you booted Linux off a floppy and it just told you. Saved hours of faffing around with DOS utilities and diagnostics. Quicker than lifting the lid, pulling the drive, finding the part number and putting it into this new "google" thing.

      Absolutely amazing.

      But just like economic expansion, industrial growth, and human population on a single planet, it cannot continue forever. At some point, collapse is absolutely 100% inevitable.

      1. Anonymous Coward

        Re: Are we reaching a monolithic limit?

        > Principles. I mean, you could say it has one architectural principal, and it's Linux

        ... or even Linus.

        Despite usually having a conditioned response to double-check homophones, it clearly doesn't function early on a Saturday morning!

        And above and beyond everything you have, quite justifiably, said, it also manages to run - efficiently - on a whole range of different hardware architectures.

        1. Liam Proven (Written by Reg staff) Silver badge

          Re: Are we reaching a monolithic limit?

          > ... or even Linus.

          D'oh! You're right, of course.

          One I type dozens of times a day, the other dozens of times a year...

      2. Altrux

        Re: Are we reaching a monolithic limit?

        Amazing indeed - I think I first started playing with Linux off a magazine 'cover disk' (remember those?) in the late 90s. By 2002, I switched to it full time on my main PC, accepting the compromises back then. In 2024, it now runs 3 PCs, 4 Raspberry Pis, 3 phones, and likely a few other embedded devices, just in this one house. Just one tiny corner of a Linux fleet that runs into the billions, and a system that really does run the world (and space beyond it).

        If I'm honest, I predicted that Torvalds would get bored and move on to something else before now - but he never has. Thirty three years in the hot seat, one man and his substantial brain, still driving the entire thing. It's an extraordinary story, although the majority of 'normal' people are still entirely unaware of it, despite using or remotely interacting with systems powered by his kernel all day long.

        Where will we be in another 30 years?

    3. IceC0ld

      Re: Are we reaching a monolithic limit?

      > but that's largely down to there being one man at the centre who understands how pretty much all of it works

      which leads us to THE question?

      how much longer can he go on? we are but mortal, and he may well feel he deserves a retirement

      rather than the infinitely sadder work-till-you-drop option

      so who IS there out there that could actually come close to filling a pair of rather generously over sized boots?

      I will assume there are already plans in place, and moves afoot to allow for a seamless transition

      BUT, maybe when 'HE' is no longer available, maybe the entire ethos of the Linux setup will be shaken up and dropped to see where it lands?

      as posited, it is already only being maintained because one man understands where it all needs to be, and what it needs to do

      1. jake Silver badge

        Re: Are we reaching a monolithic limit?

        "so who IS there out there that could actually come close to filling a pair of rather generously over sized boots"

        It's all been discussed ad nauseam, starting a couple decades ago.

        Look up "what happens if Linus gets hit by a bus" just for a glimpse of the total conversation, and roughly what will happen.

        "because one man understands where it all needs to be, and what it needs to do"

        This is false.

        1. DS999 Silver badge

          Re: Are we reaching a monolithic limit?

          It's also false that Linus understands "everything" in the kernel. That's true for the base kernel I'm sure, but he's probably not got clue one about the majority of drivers out there. Because most of them aren't drivers for another filesystem or something else he considers "interesting". Most kernel modules are something like the new fastest-on-the-block InfiniBand device which people setting up AI clusters from Nvidia B200s might care about, or some USB webcam produced by a Chinese company no one in the west has ever heard of. Stuff none of us will ever encounter, or use.

          If you took a "poll" of Linux systems that Register readers used or were responsible for I'll bet the total number of unique loaded modules would be a low single digit percentage of the total number of kernel modules in the latest kernel release. Linux kernel modules are the ultimate long tail, and that long tail does not contribute any complexity to the base kernel.

    4. FIA Silver badge

      Re: Are we reaching a monolithic limit?

      I've said it before and I'll be flamed for saying it again.

      Linux needs a suite of well defined driver layers. These need to be documented, versioned, and stable....

      If I could write my fs to Linux_FS_ABI_v1 then I'd know it works on any kernel that supports that ABI. I wouldn't have to re-compile every five minutes, and I could ship updates to my driver independently of the kernel.
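
      Something like the sketch below is what I mean - purely hypothetical, none of these names exist in any real kernel, but it shows the shape of a frozen, versioned interface a filesystem driver could be built against:

      /* linux_fs_abi_v1.h - HYPOTHETICAL versioned filesystem-driver ABI.
       * Nothing like this exists in mainline; it is only a sketch of what a
       * frozen, documented interface could look like. */
      #ifndef LINUX_FS_ABI_V1_H
      #define LINUX_FS_ABI_V1_H

      #include <stddef.h>
      #include <stdint.h>

      #define LINUX_FS_ABI_VERSION 1   /* bumped only for incompatible changes */

      struct fs_abi_superblock;        /* opaque handles owned by the kernel */
      struct fs_abi_inode;

      /* Operations a filesystem driver exposes to the kernel. Once v1 is
       * published, the layout and semantics of this struct never change;
       * new capabilities would go into a separate v2 struct instead. */
      struct fs_abi_ops_v1 {
          uint32_t abi_version;        /* must be LINUX_FS_ABI_VERSION */

          int  (*mount)(struct fs_abi_superblock *sb, const char *dev,
                        const char *options);
          void (*unmount)(struct fs_abi_superblock *sb);

          int  (*lookup)(struct fs_abi_superblock *sb, const char *path,
                         struct fs_abi_inode **out);
          long (*read)(struct fs_abi_inode *ino, void *buf, size_t len,
                       uint64_t offset);
          long (*write)(struct fs_abi_inode *ino, const void *buf, size_t len,
                        uint64_t offset);
          int  (*sync)(struct fs_abi_superblock *sb);
      };

      /* A driver built against v1 registers itself at load time and can then
       * expect to keep working on every kernel that still honours v1. */
      int fs_abi_register_v1(const char *fs_name, const struct fs_abi_ops_v1 *ops);

      #endif /* LINUX_FS_ABI_V1_H */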

      Now I do understand that the level of co-operation and discipline required for this to happen wouldn't work well with a GPLd project... (as it would make life easier for people to not do the GPL thing...) but I can dream.

      And before the inevitable... 'Just write it' comments come in... I'm not that good a developer... but I am a good enough developer to realise that doing something like this is way above my skill level and interpersonal skill level. (You'd need someone as irritating as Pottering to herd all the sheep, and that's something I couldn't do myself).

      1. DS999 Silver badge

        Re: Are we reaching a monolithic limit?

        It wouldn't matter if you wrote it, Linus would reject it. He's stated his reasons for not wanting standard kernel ABIs in Linux many times. You may not agree with them, but until he's replaced he's got the final say on that matter.

  2. Anonymous Coward

    we assure you, the FOSS desk is an entirely LLM-free zone

    ... and with a name like LLiaM, who could doubt it?

    1. Korev Silver badge

      Re: we assure you, the FOSS desk is an entirely LLM-free zone

      He's Proven not to be AI

      1. HuBo Silver badge

        Re: we assure you, the FOSS desk is an entirely LLM-free zone

        "[Patreon] didn't believe this vulture was a human" ... well, a vulture with such a penchant for Plan 9 might indeed be LLM-free, yet not entirely human ... likely sympathizing with outer space ... (and ain't we all?!)

  3. rgjnk

    Personal investment

    "genius-level intelligence and intense personal investment in their projects."

    I guess there's an element of ego involved in this; personally I've never got *that* invested in a project, even if I'd staunchly defend it against the inferior alternatives or any criticism. Maybe the trick is having an ego big enough to just not care?

    Tying yourself too closely to a project is a trap - you'll spend too much time on it well past the point where it stopped being interesting and worse there'll be a long long tail of people asking for your input after you've moved on.

    Plus who wants to be a one trick pony when you have the ability to do other things too?

    On another note, it's hardly a great product development route where it's driven by clashing egos and you're all dependent on each other for things to happen. It's 'open' but not really...

    1. Anonymous Coward

      Re: Personal investment

      > Tying yourself too closely to a project is a trap - you'll spend too much time on it well past the point where it stopped being interesting and worse there'll be a long long tail of people asking for your input after you've moved on.

      > Plus who wants to be a one trick pony when you have the ability to do other things too?

      One could say the same for romantic partners, yet many people happily and productively stick to a single love for the rest of their life once they've found it! You shouldn't assume that everyone has the same proclivities (or ADHD) as yourself.

      1. Anonymous Coward

        Re: Personal investment

        Generally your ex's new flame doesn't come asking you to consult on how to interface with her, however.

  4. Adair Silver badge

    Every tree...

    has its natural life span. Once it's fully grown it's just a matter of time before senescence sets in. Hopefully, before that process becomes too advanced the tree will have reproduced, but even if not there is usually another nearby tree for the creatures that live on and/or feed on it to move on to.

    OSes are not really that different. It's just a question of time.

    1. chasil

      Different tree

      Perhaps bcachefs would be better placed in a different kernel, say NetBSD or Illumos.

      Linux has a particularly fragmented filesystem landscape, which is mostly Oracle's fault, but perhaps some is the community itself.

      Demoting Linux to the status of a second class citizen would mirror ZFS, and is perhaps the right step.

      1. Liam Proven (Written by Reg staff) Silver badge

        Re: Different tree

        > Perhaps bcachefs would be better placed in a different kernel, say NetBSD or Illumos.

        No, you are missing the core point here, the reason the project exists.

        There are more sophisticated filesystems out there. Some are FOSS, notably ZFS. But they are not GPL. There are lots of FOSS licences: this is itself a problem. Some are compatible with one another, some are not. GPL3 is stricter than GPL2; LGPL is much looser than GPL 2 or 3. Neither GPL 2 nor GPL 3 says much about network services, so there is AGPL. Etc. etc.

        Linux is GPL2. You mustn't put GPL3 code into a GPL2 kernel, just as you mustn't put BSD-licensed code into a GPL kernel or GPL code into a BSD-licensed kernel.

        Think of the licenses as being another compatibility layer: BSD can talk to GPL and GPL can talk to BSD, but you need a layer in between. You can't mix them: they are oil and water, immiscible.

        The *reason* bcachefs exists is that it's GPL2 and it is being built _for the only successful GPL2 kernel_, which is Linux.

        There are perfectly good next-gen FSes out there _but not under GPL2_. The need was for a GPL2 next-gen FS. That FS is bcachefs.

        Illumos, FreeBSD, and NetBSD don't want it. They're not interested: they have ZFS.

        OpenBSD doesn't want it: it doesn't even want ZFS. It's too big and complicated for those guys.

        DragonflyBSD doesn't want it: it's doing its own thing which is more ambitious than bcachefs, or even than ZFS.

        The argument is not about whether all those licenses are good things or not. That is a whole other discussion which is not about code.

        The thing most people miss talking about this is that _different people are different_. What some like, others hate.

        E.g. What the GPL folks see as the weakness of the BSD licenses -- that you can use the code in proprietary products -- is *what the BSD folks like about them.* What one side sees as a weakness, the other side sees as a strength.

        They aren't going to go away.

        "One has millions more users than the other" is not an argument here. Billions believe in Roman Catholocism but that doesn't make the Protestants go "oh, hey, their form is more popular, we should change." Size is no guarantee of strength or worth. Some factions perceive being smaller and more selective as a good thing.

        Billions love football. I detest football. To football fans, the type of football matters a lot: lots more like soccer than rugby, and there are 2 types of rugby and I don't even know whether the other countries who play rugby play league or union. I don't know because I don't care: it's still a kind of football and I loathe and despise all ball games, and it makes no particular difference to me if the ball is spherical or not, or if there is one per side, or 2 or 10 or 11 or 12, or whether they hit the balls with implements or not, or if the balls are big or small. It's all sportsball and I don't like sportsball.

        It makes no difference to me that most of the world loves it. It's not a factor. I don't and the fact that lots of loud people love it and shout about it and wear the clothes and wave the scarves and sing songs about it _only makes me hate it more_.

        The BSD folks do not perceive the greater volume of GPL code as a good or desirable thing. They are happy with their way.

        Me, personally, I am neutral. It's all FOSS and I like FOSS and which flavour of FOSS it is matters not to me. I use non-FOSS freeware, too. I am not Stallman, not a puritan, not strict.

        It is a grave mistake to think that one of these licenses is "better" or "worse" because one shifts more units, because *it is not about shifting units.*

  5. habilain

    It's worth noting that while Overstreet may be good as a filesystem developer, he's not particularly great as a kernel-dev. He's been remonstrated with multiple times for sending pull requests for code which he's either not tested adequately or not tested at all. When offered advice on how he could improve his testing, he's complained that no-one else can understand the tests he needs to do (but apparently isn't doing) and therefore no-one else can help him set up test harnesses. I suspect this is part of the reason why other parts of the kernel don't want much to do with him; they don't particularly trust his code, and don't want it in their subsystems. The fact that he's kind-of a jerk isn't helping him; aside from the current situation, apologising for problems that he created (like breaking big-endian kernels - more than once, as I understand it) would go a long way to resolving the situation.

    1. HuBo Silver badge

      "Get your head examined."

      "And get the fuck out of here with this shit." (both linked under "intemperate response" in TFA) do tend to support the "kind-of a jerk" aspect of this here bcachefs spat, indeed (imho)!

    2. ecofeco Silver badge

      I find this quite believable. I've met far too many people just like that.

    3. Tom 38

      The whole exchange was drearily predictable - Overstreet launches the f-bombs, it's resolved off-thread, but Overstreet won't apologise for his language and writes a 10k diatribe on why he's okay to behave like that...

      Literally the whole spat would be over if he had posted "I'm sorry, I got emotional and overreacted"...

  6. Groo The Wanderer

    Turfing a project just because someone was swearing is petulant and childish, ignoring any and all benefits of said projects.

    Grow up, kiddies. Play nice in the sandbox.

    1. JoeCool Silver badge

      go read his correspondence

      a month or so ago the reg reported on his warnings from Torvalds. that email exchange makes it clear that he has bigger issues. he is argumentative to the point of ignoring serious discussion, and only looking to dodge responsibility for his words and actions. he is more interested in winning the argument than fixing the problems. he ignores explicit mandates. he dodges giving straight answers.

      in other words he is unmanageable, and on a large project the ability to work with others is paramount.

      1. Anonymous Coward

        Re: go read his correspondence

        So, if it's so good, why don't they fork it, and take it over!

        1. JoeCool Silver badge

          Re: go read his correspondence

          Now, there's a question ...

          perhaps "so good" is not to be assumed and can only be proved by a sustained run in the kernel, and subsequent distro builds ?

          Or perhaps the limited developer resources available have higher priorities ?

          But probably the right answer is the far more mundane: "push" has not yet become "shove"

    2. CowHorseFrog Silver badge

      Isn't Linus well known for calling people names?

      1. alisonken1

        It's a case of overhype (somewhat).

        Yes, Linus' rants are infamous when they go out. No, the rants really do take some time to build up.

        He never just blew up for no reason, and never on the first try. With that said, if you were a long-time kernel developer and tried to post something that was not correct, then yes - you did get an eyeful of email/postings.

        If you were just starting, then he would encourage you and give directions. As long as you were receptive to advice and mentoring, you never received a rant.

        1. JoeCool Silver badge

          I didn't say a thing about language

          CHF -

          I am quite alright with explicit and unambiguous speech. Even colourful extended metaphors involving gender equipment and related activities.

          My point is wholly about attitude. If a diatribe isn't driving toward a solution, it's just verbal puffing and release.

          And for the record, yes Linus has definitely crossed that line, but he came back.

      2. jake Silver badge

        No. Linus was well known for calling idiots idiots.

  7. mark l 2 Silver badge

    "In case you have difficulties opening Kent Overstreet's blog, Patreon has protective measures in place, which gave us problems on some browsers. When we were using Waterfox, it didn't believe this vulture was a human (we assure you, the FOSS desk is an entirely LLM-free zone). You may have to try a few different browsers or machines."

    NOPE NOPE NOPE! I'm not installing another browser just 'cos some shitty devs can't be bothered to test their website in anything other than Chrome. Fsck you Patreon!

    1. ecarlseen

      Could just be me, but I find it works fine with Safari and standard Firefox.

  8. Anonymous Coward

    Kent doesn't need to interact on the mailing list

    He could just work on his file system and someone else can submit code pushes.

    I suspect that no one wants to work with him, though...

  9. Tron Silver badge

    If geeks got axed for swearing in development disputes, we would all still be using typewriters.

    Linux's model is flawed. I would like something a lot simpler, funded by donations from all those wealthy tech millionaires and billionaires. Maybe they could give something back and still buy an island full of bikini clad women to retire to.

    An OS* that does the basics of processing and networking, permitting simple programs to run with file compatibility for data. The focus should not be on keeping up with the Joneses - Apple and MS - but on producing something that is reliable and secure. You want to play games, buy a console. You want AI BS, go for Windows. Just do something simple that businesses and local government can use for day to day operations, and everyone else for surfing. Backwards compatibility for a minimum of 30 years, with very occasional updates, fully tested, breaking nothing. With a modular user customisation package that allows companies to adapt the core software to their needs cheaply, without issues. Anyone wants to produce anything else on it, go ahead. Plus, let them charge for it in return for paying a tithe of all income to the group maintaining the core OS.

    *OS, not kernel. Having a thousand different Linux distros, most vanishing after a few years, has undermined the entire ecosystem.

    The whole point of FOSS is that it doesn't need to be novel and adopt the latest gimmicks to outsell competitors. How about embracing that and producing something useful, dull, pedestrian, but free and rock solid? Computing without the circus and pain. Computing that just works on absolutely any PC from the last 3 decades. Lower your expectations and go back to the future.

    1. Anonymous Coward

      Re: If geeks got axed for swearing in development disputes, we would all still be using typewriters.

      What you desire is called FreeBSD

      1. Orv Silver badge

        Re: If geeks got axed for swearing in development disputes, we would all still be using typewriters.

        And it's systemd-free!

    2. doublelayer Silver badge

      Re: If geeks got axed for swearing in development disputes, we would all still be using typewriters.

      You want someone to build something basic, defined as having everything you want and nothing else, and you want it to be paid for by other people voluntarily choosing to give the project money even though some of them directly compete with it and the rest probably won't be using it because they might want something more modern than you want? Sounds great, but it's not going to happen. Don't expect it.

      Nothing stops you from designing the lite OS of your dreams. The general problem with lite products is that a lot of people find that, while they certainly don't need all the things that large products have, there are two or three little features that it has and your version doesn't which they need or want. Trying to insert them into your lite version increases the support requirement. Trying to use the lite version without them is annoying. If it's a commercial product, many users will end up living without something that would be too expensive, but if it's open source, they will move where they need to move so they can add them. Linux has been successful because throughout its development, you could add more and more things. You can still remove many of them and have a small and light build, but that is very different from one that doesn't support them, which would be unsuitable for a lot of people.

  10. Alan Mackenzie

    To Liam: Clarification over the state of the Emacs project vis-a-vis CC Mode.

    Hello, Liam.

    Yes, it is true that I have resigned from the Emacs project, and I have to say I'm not proud of my actions which, together with the actions of others, led to the disagreement which forced me out.

    CC Mode, as well as being a part of the Emacs core, is also a standalone project. I intend to continue running that standalone project, and will cooperate with the Emacs core maintainers to ensure that bug fixes and fixes for language changes (e.g. the three-yearly C++ standard) filter through to Emacs itself.

    I will no longer be participating directly in the development of the Emacs core, that's all.

    Incidentally, there are new modes based on Tree Sitter in Emacs which are faster (usually, but not always) than the corresponding CC Mode modes, and though they still have rough edges, it's not inconceivable that these new modes will someday supersede CC Mode and other traditional major modes entirely.

    Emacs continues to be a useful and supremely user friendly (as contrasted with beginner friendly) program.

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: To Liam: Clarification over the state of the Emacs project vis-a-vis CC Mode.

      > Yes, it is true that I have resigned from the Emacs project, and I have to say I'm not proud of my actions which, together with the actions of others, led to the disagreement which forced me out.

      Thanks for this, and for the comment.

      I am sorry to hear of it and of anyone feeling they had to leave.

      > Emacs continues to be a useful

      This I believe.

      > and supremely user friendly

      That would have made me spit out my tea, though.

      > (as contrasted with beginner friendly) program.

      Aha. An interesting distinction. Not sure I altogether share it, but an interesting take.

      1. captain veg Silver badge

        user friendly

        As a CS student in the mid-eighties, I spent an awful lot of time writing code on dumb terminals. Our main weapons were line editors.

        One of my near-contemporaries (actually a year ahead) created a screen editor named EDT, modeled, I understand, on a DEC program of the same name. We weren't using DEC machines, so this was some feat.

        Word got around pretty quickly that EDT let you do weird and wonderful things like seeing your entire source code file through a scrolling viewport and navigating it using the arrow keys on our dumb terminals. Just so long as it wasn't a teletype. There were still quite a few of those around. I dare say that the program was closely tied to the particular dumb terminals that we actually had on campus.

        Then we were let loose on Unix, and someone got Emacs working on it. This was almost as obvious to use as EDT but programmable! About that time I splashed out on a DOS PC, and was extremely pleased to discover that you could run Emacs on that too. Add in a 300 baud modem and suddenly I didn't even need to visit the terminal room on campus.

        So, I have good memories of Emacs. Haven't actually used it in nearly 40 years.

        -A.

    2. sabroni Silver badge

      Re: user friendly (as contrasted with beginner friendly) program

      We need to have a shared understanding of words or they become meaningless.

      User Friendly means people who don't have experience with the software can use it easily. If you have to be familiar with the product before you can use it, that's not user friendly, it's just an interface that you have become familiar with.

      1. jake Silver badge

        Re: user friendly (as contrasted with beginner friendly) program

        Arguably, people who are unfamiliar with the product aren't users, almost by definition. People learning to use a product also aren't users, as they are still learning to use the product. It's not until you are actually comfortable using the product that you become a user.

        C, Fortran and COBOL are all user friendly; INTERCAL, not so much.

        The SysV and BSD inits are user friendly; the systemd-cancer, not so much.

        1. doublelayer Silver badge

          Re: user friendly (as contrasted with beginner friendly) program

          By that definition, literally everything is user friendly. Systemd is very user friendly; if you know how to use it, then you're a user, if you don't know yet, then you're not a user, and if you don't know how to do it as quickly, then you're not a full user either, the category in which you would seem to be given your choice to avoid it.

          That definition only works in tautologies. A lot of people are users of something without being expert administrators of it. There are a lot of systemd users who use it because that's what they're using to manage services, and on that basis it can be less user friendly than alternatives if it is harder for those users to make it do what they want. Whether it actually is is not required for this discussion, so I'm going to opt out of that particular fight/lecture, but at least with that definition, the adjective means something and can be used in comparisons.

          1. druck Silver badge

            Re: user friendly (as contrasted with beginner friendly) program

            The core of systemd, allowing you to write init scripts, is quite friendly, i.e. what the damn thing was supposed to do. However, all the crap that has been added to it, with its tentacles crawling into ever more areas of Linux, and replacing other perfectly good programs with arcane and often buggy implementations which Poettering WILL NOT FIX, is why there is so much entirely justifiable hate.

      2. Daniel Pfeiffer

        Re: user friendly (as contrasted with beginner friendly) program

        That’s a subjective distinction. Something may need learning, but then be user friendly, or still not. Emacs to me is in the 1st category.

        I am trying VSCode in parallel, and maybe there are just too many features I haven’t yet discovered or configured. But why does double clicking on parens not select the whole construct? Why is there no difference between double clicking the underscore to get a whole snake-case identifier, vs. just one of its words? Why is there no obvious way to get undo only in the selected region?

        VSCode may be capable of such highly comfortable features, but they’re not obvious. In this sense Emacs is more user friendly!

  11. CowHorseFrog Silver badge

    Linux has a fundamentally broken architecture.

    Filesystems should not be running at the kernel level; then this and other FS bugs and other problems would not be as dangerous, etc.
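
    (Linux can already run a filesystem entirely in user space via FUSE, which is roughly what this would mean in practice. As an untested, illustrative sketch against the libfuse 3 API - assuming the libfuse3 development package is installed - a minimal read-only filesystem looks about like this:)

    /* hellofs.c - minimal read-only FUSE filesystem (illustrative sketch).
     * Build:  cc hellofs.c $(pkg-config fuse3 --cflags --libs) -o hellofs
     * Run:    ./hellofs /tmp/mnt      (then: cat /tmp/mnt/hello)
     */
    #define FUSE_USE_VERSION 31
    #include <fuse.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/stat.h>

    static const char *msg = "this filesystem runs entirely in user space\n";

    /* Report a root directory containing a single read-only file, /hello. */
    static int hello_getattr(const char *path, struct stat *st,
                             struct fuse_file_info *fi)
    {
        (void)fi;
        memset(st, 0, sizeof(*st));
        if (strcmp(path, "/") == 0) {
            st->st_mode = S_IFDIR | 0755;
            st->st_nlink = 2;
            return 0;
        }
        if (strcmp(path, "/hello") == 0) {
            st->st_mode = S_IFREG | 0444;
            st->st_nlink = 1;
            st->st_size = (off_t)strlen(msg);
            return 0;
        }
        return -ENOENT;
    }

    static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                             off_t off, struct fuse_file_info *fi,
                             enum fuse_readdir_flags flags)
    {
        (void)off; (void)fi; (void)flags;
        if (strcmp(path, "/") != 0)
            return -ENOENT;
        fill(buf, ".", NULL, 0, 0);
        fill(buf, "..", NULL, 0, 0);
        fill(buf, "hello", NULL, 0, 0);
        return 0;
    }

    static int hello_read(const char *path, char *buf, size_t size, off_t off,
                          struct fuse_file_info *fi)
    {
        size_t len = strlen(msg);
        (void)fi;
        if (strcmp(path, "/hello") != 0)
            return -ENOENT;
        if ((size_t)off >= len)
            return 0;
        if (size > len - (size_t)off)
            size = len - (size_t)off;
        memcpy(buf, msg + off, size);
        return (int)size;
    }

    static const struct fuse_operations hello_ops = {
        .getattr = hello_getattr,
        .readdir = hello_readdir,
        .read    = hello_read,
    };

    int main(int argc, char *argv[])
    {
        /* The mount point comes from argv, e.g. ./hellofs /tmp/mnt */
        return fuse_main(argc, argv, &hello_ops, NULL);
    }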

    1. jake Silver badge

      If Linux is as bad as you are making it out to be, Shirley you should stop using all products containing Linux immediately, if not sooner.

      We'll wait.

      1. CowHorseFrog Silver badge

        I would suggest you buy a book about what OS architecture is...

        and maybe then write to Apple because they recently moved the file system out of their kernel... but hey you obviously know more than them.

        1. druck Silver badge

          I'm sure you could lend Jake yours, after you've finished colouring in the pictures of course.

  12. Fruit and Nutcase Silver badge

    Too many

    bcachefs

    in the kitchen.

    Or maybe what is needed is more / the right one.

    May I recommend...

    https://en.m.wikipedia.org/wiki/Swedish_Chef

    "Bork, bork, bork!".

  13. MarkMLl

    Not quite how I read it.

    "...executive summary might be that the Rust folks are proposing changing how the C parts of the kernel work slightly in pursuit of cleaner, more reliable code."

    I'm not sure that's entirely accurate. Having read the transcripts shortly after that particular spat broke out, the complaint was that there were a lot of poorly-documented edge-cases in an inconsistent API, which demanded poorly-documented use practices to deliver reliable results.

    The proposal was not for the API to be changed to suit Rust. It was for the API to be cleaned up and- among other things- documented, which would have obvious advantages for all.

    The existing maintainer reacted like a true prima donna, which was not to his credit and reflected badly on the entire community.

    MarkMLl

    1. Liam Proven (Written by Reg staff) Silver badge

      Re: Not quite how I read it.

      > The proposal was not for the API to be changed to suit Rust. It was for the API to be cleaned up and- among other things- documented, which would have obvious advantages for all.

      Hi Mark, and thanks for the comment.

      The thing is: I didn't say the suggested changes were to suit Rust.

      I said, as you quoted in fact:

      « in pursuit of cleaner, more reliable code. »

      The core of the argument here, ISTM, is that the Rust folks are suggesting changes _to working C code_ which would make it better _for other C code as well as Rust code_... but the C-wielding old-timers are offended by the proposal of changing working code. It works, therefore, leave it alone. "If it ain't broke, don't fix it."

      Which is why I specifically said that by their own lights _both sides are right_.

      That is the hardest kind of disagreement to resolve: when both sides are correct, but they're judging by different criteria.

      And as I said in an earlier comment: I personally don't have skin in this game. Me, in an ideal world that's probably unachievable, I'd like to see Linux banished to a compatibility layer, running in a VM so legacy code works, that VM running on a vastly smaller simpler OS.

      As I wrote over a decade ago -- when Netware was waning in power, the company did have a route out but it didn't take it. As the limitations of Netware as a server OS became critical, it was possible to run Netware in a VM under a more capable OS, which had SMP support and native apps and a GUI -- OS/2.

      https://www.theregister.com/Print/2013/07/16/netware_4_anniversary/

      But Novell chose to try to grow Netware instead, and give it a GUI, and Java, and SMP, and all that sort of thing.

      Within a decade (Netware 6 was 2001, I think) it collapsed under its own weight. Fair, it lasted longer than OS/2 as a commercial product.

      But I bet people are still running Netware in production somewhere, and I know people are running OS/2: ArcaOS is an existence proof.

  14. Anonymous Coward

    At some point a discussion should be limited to the people involved, not half of the internet. But then I'm breaking that rule as well, so perhaps time to delete my account and be consistent.

    1. Liam Proven (Written by Reg staff) Silver badge

      > so perhaps time to delete my account and be consistent.

      Which account is that, then?

  15. AdamWill

    disagree!

    "Overstreet's post is over 6,000 words long, but in The Reg FOSS desk's opinion, he does in fact make his case fairly well."

    Um. Really? I disagree *heavily*.

    As a general principle, I'd say that anyone who can't manage to make a simple apology for writing "Get your head examined. And get the fuck out of here with this shit." when asked to, but *can* bash out a 6000-word self-exculpatory blog post full of entirely irrelevant technical detail, is probably not someone I'd want to have to deal with at work.

    If you do something jerkish, own up and say sorry, then move on. If you spend weeks or months, and thousands of words, on "actually I wasn't being a jerk but also I was right and lots of other people historically have been jerks so why am I being called out?! It's unfair! I'm the victim here!", um, that's a bit of a red flag for me.

    1. doublelayer Silver badge

      Re: disagree!

      That all depends on whether that technical detail is, in fact, irrelevant. I wasn't on the original discussion, so if his description of the consequences of different memory failure modes is just lies, then maybe I wouldn't know that. If what he writes is true, then there may be more issues than unprofessional language, not all of them his fault. Of course, since you say they are irrelevant, maybe you can explain why he was incorrect about the technical detail. Angry words, while they can cause some problems, are less of my concern than the technical disagreement, and they are by no means new to the Linux kernel devs.

      1. AdamWill

        Re: disagree!

        It doesn't matter whether he's right or wrong. That's what "irrelevant" means. It's never necessary or appropriate to talk to someone like that even if they are, in fact, entirely and unambiguously wrong. It doesn't help anything. It only causes trouble.

        1. doublelayer Silver badge

          Re: disagree!

          A lot of things can be relevant depending on the context. Yelling at others tends not to help. However, in my opinion and experience, yelling at people as a first step because that's what you like doing is pretty different from yelling at someone after they've caused massive problems several times in a row and insist on continuing the behavior that breaks them. The response to both situations might be to tell the yelling guy that he should not react like that, but the latter situation will not be entirely fixed by doing that because the problematic situation resulting in the argument would still be around, so you would have to take an additional action. Finding out which of those or where on the spectrum between them we are can help to deal with it appropriately.

          The problem I see here is that some are trying to identify a right and wrong side here. Either Overstreet is correct about the technology, so shouting was completely appropriate, or shouting like that is never appropriate, so whether he was correct or not is irrelevant. Both those approaches are guaranteed to get at least one thing wrong which will cause trouble in kernel development, what both of the people involved theoretically are doing this for.

          In the situation where Overstreet's technical complaints, in short that proposed changes will cause existing functionality to break and possible security vulnerabilities in the kernel, are correct, then he may still need to be told not to use the language he has, but the person suggesting the changes needs to be told not to implement those changes, and attempts to do so need to be inhibited. If Overstreet is wrong, then he still needs to be told not to use the language, and his mistaken understanding should be countered by official guidance about what other developers who are watching this and implementing something connected to this should do.

          Or in other words, both branches allow for the CoC group to take action about the language, but I also want to see the technology problem resolved because failing to do so will have significant problems for future development in this area. Ignoring that because of angry sentences in an email would be dangerous. Deciding about it because of angry sentences in an email would be even worse.

  16. Orv Silver badge

    There's always that guy who argues, "I'm smarter than all of you, so the code of conduct shouldn't apply to me."

    Such people are only rarely worth the trouble. They may get things done but they're energy vampires to the rest of the team.

  17. RAMChYLD Bronze badge

    This is going to be sad

    Given how many issues there are running ZFS on Linux right now (a kernel upgrade can and will break your module), bcachefs looked like a solution. But nope, things had to go wrong.

    Sad.

    And no, a kludged-together solution like running btrfs on top of bcache and LVM is not the answer. Too many points of failure; if one occurs, you can't really work out what failed.

    1. collinsl Silver badge

      Re: This is going to be sad

      > a kernel upgrade can and will break your module

      Are you running some custom ZFS modules or something different about your kernel? Or are you on a really old version of Linux or ZFS? I've been using ZFS on Linux for years on CentOS 7 and Rocky 8 and have not had this happen since the version 1.x days of ZoL about 3-4 years ago.

  18. jimsinenomine

    Talk to each other

    These Linux mailing list spats are reminiscent of bad email habits that emerged particularly in '90s and '00s corporate settings. Email is good for awareness, sharing, soliciting input, etc. It's BAD for resolving problems. So, schedule a web meeting and talk it through. This is what happens nowadays in most business settings and it by and large works well.
