'Bulls%^t! Complete bull$h*t!' Reset the clock on the last time woke Linus Torvalds exploded at a Linux kernel dev

Linux kernel chieftain Linus Torvalds owes the swear jar a few quid this week, although by his standards this most recent rant of his is relatively restrained. Over on the kernel development mailing list, in a long and involved thread about the functionality and efficiency of operating system page caches, firebrand-turned-woke …

  1. Kevin McMurtrie Silver badge

    Machine learning

A long, long time ago there were experiments in using machine learning to tune caches. You'd preserve some lightweight element metadata long after the elements themselves were purged. You could then examine the metadata history to calculate how different eviction strategies would have changed cache efficiency. It was a continuous and automated form of the tuning that would normally be manual and hardcoded during product creation. If you're really fancy, you can break it down by type of cache element.
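The replay idea above can be sketched offline in a few lines — a toy illustration only (nothing to do with any real kernel code), assuming simple LRU and LFU policies and a synthetic access trace:

```python
# Toy sketch of the idea described above: replay an access trace against
# simulated caches to score eviction strategies after the fact. In a live
# system, the retained "ghost" metadata is what lets you reconstruct this.
from collections import OrderedDict

def hit_rate(trace, capacity, policy):
    """Return the hit ratio of a simulated LRU or LFU cache over a trace."""
    cache = OrderedDict()            # key -> access count, in recency order
    hits = 0
    for key in trace:
        if key in cache:
            hits += 1
            cache[key] += 1
            cache.move_to_end(key)   # refresh recency for LRU
        elif len(cache) < capacity:
            cache[key] = 1
        else:
            if policy == "lru":
                cache.popitem(last=False)              # evict least recent
            else:                                      # "lfu"
                del cache[min(cache, key=cache.get)]   # evict least used
            cache[key] = 1
    return hits / len(trace)

# Two hot keys plus a long cold scan: LFU retains the hot keys,
# while LRU keeps flushing them out during each scan.
trace = ["h1", "h2"] * 3
for i in range(50):
    trace += ["h1", "h2", f"c{4*i}", f"c{4*i+1}", f"c{4*i+2}", f"c{4*i+3}"]
```

With a cache of four slots, LFU scores markedly better than LRU on this trace — exactly the kind of difference the retained metadata would surface automatically.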

    I thought this was more common today in performance critical caches, but maybe Linus screamed it out of Linux.

    1. Anonymous Coward
      Anonymous Coward

      Re: Machine learning

I still don't understand why bulk copies of tiny files (tons of user docs, a game that saves every entity as a new file, massive photo libraries, etc.) can't behave the way copying a ZIP of the same data does.

Copy the ZIP (granted, you need to compress it first, but even with lossless compression it's just "one file") and it copies normally, and fast.

      Try to copy each folder, and the OS dies in millions of file lookup table edits and writes.

I know there are some technical reasons, but why can it not first copy a large chunk of the data, then bulk update its file system?

As said, the ZIP (or some exe/cab file systems) do this. Some torrent/download/file pre-allocation systems allow for the space to be reserved... basically I can see all the tools being there, but none being used except "Let's move each bit at a time, then update every single file structure on the HDD each time" in excruciatingly slow fashion.

      Ok, on an SSD this is now practically a non-issue, but on HDDs it takes forever.

      1. Anonymous Coward
        Anonymous Coward

        Re: Machine learning

For each file there is other information being copied and created on the HDD (the header, for example). When copying a zip, that data is only copied once, and the data in the zip itself contains all the other information. Time-wise, if you account for creating and extracting the zip (storing all that information), there probably wouldn't be much difference anyway. In theory you are probably making it slower.

      2. SImon Hobson Bronze badge

        Re: Machine learning

        Try to copy each folder, and the OS dies in millions of file lookup table edits and writes.

The issue here is that AT ALL TIMES the system needs to ensure that the filesystem is in some semblance of a consistent state. So for each file, it needs to:

• find some empty space and allocate it - and make sure that anything else looking for space now knows that it's not free

• copy the data into that free space

• add all the information that the filesystem needs in order to know where the file is, etc.

• add in all the other stuff about the file - attributes, directory entry, and so on.

        Yes, in theory you could write code that would recognise that you are copying a million tiny files - and do each step a million times before moving onto the next step - but that would be relatively complicated code compared to doing "for each of a million files, do this standard process".

        So yes, if you copy your ZIP file, there's one space allocation, one data copy, one metadata creation and update. But when you copy the files individually, each of those steps will be performed for each file.

        If you take a step back, it would be "rather difficult" to reliably handle your "copy a million files more efficiently" process. The main issue is that the filesystem code (the bit that's doing all these complicated updates to the filesystem data structures) doesn't know what's being copied. Typically you'll invoke a user space program that does the "for each of a million files; copy it" part - and there are a number of different user space programs you might use. Underneath that, the filesystem code just gets a call via a standard API that (in effect) says "here's a chunk of data, please write it to a file to be called ..." - so all it can do is create each file when it's told to.

        To do the more efficient "allocate space for a million files, copy the data into that space, create all the file metadata for those files" process could be done - but it would be harder to do and hence more error prone.

You could write your copy program so that it will:

        • Call the appropriate API to create each file and specify how big it will be

        • Copy the data into each file

        • Set the metadata on each file

        • Close all the files

Sounds simple - but you need to consider system constraints such as limits on open files. So next you end up having to split large lists of files into smaller groups, and the task gets more and more corner cases to handle.
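The four steps above can be sketched as follows — a minimal illustration assuming a POSIX system (the function name and structure are mine, not any real tool's; real code would also split the list to stay under the open-file limit, handle errors, and chunk large files):

```python
# Sketch of the batched approach described above: do each step for every
# file before moving to the next step, rather than all steps per file.
import os, stat

def batched_copy(pairs):
    """pairs: list of (src, dst) paths. Step-by-step, not file-by-file."""
    fds = []
    # Step 1: create every destination and reserve its full size up front,
    # so the allocator can hand out space in one pass.
    for src, dst in pairs:
        fd = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        size = os.path.getsize(src)
        if size:
            os.posix_fallocate(fd, 0, size)   # allocate the space now
        fds.append(fd)
    # Step 2: copy the data into each file.
    for (src, _), fd in zip(pairs, fds):
        with open(src, "rb") as f:
            os.write(fd, f.read())            # fine for tiny files
    # Step 3: set the metadata, then close everything.
    for (src, _), fd in zip(pairs, fds):
        os.fchmod(fd, stat.S_IMODE(os.stat(src).st_mode))
        os.close(fd)
```

Note that this only batches the work as seen from user space; underneath, the filesystem still performs its own consistency bookkeeping per call, which is the point being made above.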

        Caching complicates the process, as does journalling. Once you have caching, then you have the risk that data you have written into the filesystem doesn't get written to the disk (eg, on power failure) - and worse, that some of it might while other parts didn't, and potentially with the data handled out of order.

        So for example, it would be possible for the directory entry to be created etc - but without all the data actually having been written into the file. So the user sees a file (after things have been cleaned up after the power failure) - but what's in it isn't what should be in it. This is just a simplistic example.

        So journalling deals with this by writing a journal of what's being changed so that after the crash, the journal can be used to either complete a process or roll back those bits that did happen - so either the file was copied, or it wasn't, no "it's there but it's corrupt" options. Again, somewhat over simplistic but you should get the idea.

        But journalling adds considerably to the disk I/O needed - there's no free lunch, what you gain in filesystem resilience you lose in performance. Different types of filesystem have different tradeoffs in things like this.
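The "directory entry exists but the data never made it to disk" window described above is why user-space programs that care use the classic write-fsync-rename pattern — a sketch, assuming a POSIX filesystem (the helper name is mine):

```python
# Write to a temp file, fsync it, then rename over the target. Either the
# old complete file or the new complete file survives a crash -- never a
# half-written one, because rename() is atomic on POSIX filesystems.
import os

def atomic_write(path, data):
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)              # force the data out of the page cache
    finally:
        os.close(fd)
    os.rename(tmp, path)          # atomically swap in the complete file
    # For full durability, fsync the containing directory too, so the
    # rename itself is on disk.
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

This is the user-space mirror of what the journal does inside the filesystem: make the operation all-or-nothing, at the cost of extra I/O.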

        1. Ken Moorhouse Silver badge

          Re: Problem with zip files...

          Problem with zip files is that all of the files that constitute it should ideally be from the same time snapshot.

          Saving time should not be a consideration when compressing certain types of data. Documents and cat photos? Possibly ok. But imagine taking a zip of some accounts files where (say) the Audit Trail is split into several files and, during the zip process, the application comes along and changes those data files in a way that straddles the zip operation.

As I have no control over how the OS behaves when compressing, I would get everyone out of the program before attempting such an operation.

      3. Carpet Deal 'em
        Facepalm

        Re: Machine learning

        First, zips are single files; even if there's no actual compression, copying one is fundamentally the same as copying any other single file.

        Second, torrented files are expected to have holes in them; copies are expected to be complete. The occasional corrupt copy from an interrupted operation is acceptable; an entire directory tree of unfinished files is not.

        What you want would involve cooperation from the very structure of the file system itself. Needless to say, no major FS implements the necessary components.

      4. the spectacularly refined chap

        Re: Machine learning

I still don't understand why bulk copies of tiny files (tons of user docs, a game that saves every entity as a new file, massive photo libraries, etc.) can't behave the way copying a ZIP of the same data does.

Copy the ZIP (granted, you need to compress it first, but even with lossless compression it's just "one file") and it copies normally, and fast.

Try to copy each folder, and the OS dies in millions of file lookup table edits and writes.

I know there are some technical reasons, but why can it not first copy a large chunk of the data, then bulk update its file system?

Look at this from the system call level: there is no generic "copy a file" system call, let alone one to copy directories. Tools like cp open the source, create the destination, manually manipulate the permissions as appropriate, and enter a loop reading a bit of the source and then writing it to the destination. Finally the file times are adjusted as appropriate. Based on this there simply isn't the scope to express higher level functionality in the manner you describe. It could be added but that adds a lot of bulk to the kernel and the underlying filesystem code (which potentially has to keep track of gigabytes of incomplete writes) which is probably undesirable for what in the grand scheme of things is a comparatively rare batch process.
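The cp skeleton described above boils down to a handful of system calls — a simplified sketch (real cp also handles sparse files, reflinks, error recovery, and more):

```python
# What "cp" looks like at the system-call level, per the description above:
# open source, create destination with matching permissions, loop reading
# and writing, then fix up the timestamps at the end.
import os

def tiny_cp(src, dst, bufsize=64 * 1024):
    sfd = os.open(src, os.O_RDONLY)
    dfd = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC,
                  os.stat(src).st_mode & 0o777)     # carry permissions over
    try:
        while True:
            chunk = os.read(sfd, bufsize)           # read a bit of the source...
            if not chunk:
                break
            os.write(dfd, chunk)                    # ...write it to the destination
    finally:
        os.close(sfd)
        os.close(dfd)
    st = os.stat(src)
    os.utime(dst, ns=(st.st_atime_ns, st.st_mtime_ns))  # finally, the times
```

From the filesystem's point of view, every file copied this way is an independent create-write-close sequence — which is exactly why it cannot batch the metadata work across a million files.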

        The Windows API is far richer but that is at a higher level - the system call layer is AFAIK still publicly undocumented. Instead you are on top of a subsystem appropriate to the executable format and a DLL exposing that functionality. Internally it is likely that those higher level requests get distilled down to the same kind of primitive as for a Unix machine.

  2. Palpy

    Whew.

    Torvalds got sweary. I'm relieved. It's like the sky is still blue.

    I guess OS complications are complicated, and sh*t happens. Linux is my daily driver, and I'm very glad it's there and glad it runs as well as it does.

    1. Detective Emil

      Re: Whew.

      Yup. I initially read the forum name as Kernel development railing list,

    2. oiseau
      Pint

      Re: Whew.

      I'm relieved. It's like the sky is still blue.

      Indeed ...

      Was about to post the same thing/idea.

      Have a beer --->

      O.

  3. sitta_europea Silver badge

    The guy's been at it for thirty years. If he isn't more than a bit burned out by now he isn't human. Cut him some slack.

    1. Anonymous Coward
      Anonymous Coward

      You can be burned out,

      Without having a go at others. A simple "not a bug/won't fix" would have sufficed for now.

      1. Adrian 4

        Re: You can be burned out,

        Nobody's perfect. Don't expect them to be. Sometimes stuff gets to you.

        At least he doesn't monkey-dance around the stage for PR.

    2. cdegroot

      Nothing new...

Sorry - he was a moron when he debated Tanenbaum in a highly, err, "interesting" way and he's a moron now. I still am sad that we have to deal with Linux and not some well-architected kernel led by competent people. Linux was just an accident - the wrong code at the right place and time, like that other horror show from Finland, MySQL. Both filled a gap where quality did not matter and both have been picked up and cared for by competent people that are not their original creators. In Linux' case, only to be yelled at.

      The _real_ gem that Torvalds created, and I grant him that, was Git. Design-wise, then, as in "this is how a distributed version control should work". Alas, the UI is obtrusive, even by Unix command line standards, and I blame that as being a major reason that the distributed aspect of Git is hardly used and we're in a situation that development teams "can't work, Github is down". Still hoping that someone will fix that but with M$ now having $$$$$ interests in a centralized development model, not holding my breath.

      1. anoncow

        Re: Nothing new...

        Yah, well. Maybe. You have to keep in mind that Chinner can be a bit of a dull thud at times, and a flipping dickhead on top of it. Pay attention to what guys like Jan Kara have to say, or some of the less aggressive ext4 guys. Chinner is all politics, all the time, and sadly unsocialized. Makes Linux look like a pedigreed gentleman by contrast.

      2. oiseau
        WTF?

        Re: Nothing new...

        ... was a moron ...

        ... and he's a moron now.

        Linux was just an accident ...

        Hmm ....

        There's only one moron around here.

        If you look in the mirror you'll catch a glimpse of the guy.

        O.

        1. Anonymous Coward
          Anonymous Coward

          Re: Nothing new...

          While the rest of the rant may be wrong, Linus is a moron. His words show that much. He was supposed to have that all figured out with his little time-out, but he must not have spent enough time in the time-out chair. Someone hand him a dunce cap and send him back.

      3. TheMeerkat

        Re: Nothing new...

You forget that for all these years he kept the kernel development working without being drowned by multiple inconsistent features added by different developers who don't report to him the way developers do when they work for companies like Microsoft.

      4. Anonymous Coward
        Anonymous Coward

        Re: Nothing new...

So we can add caches to the long list of things Linus doesn't really understand. I get the impression he hasn't even read the nice clear introduction to the subject by Hennessy and Patterson. First published around the same time Linus was ripping off Minix.

To those of us who read the source code for Minix back in the late 1980's the whole cult of Linux is somewhat mystifying. It's successful not because it is good, but because it was free. Much like GCC. It's actually not very well written. Look at the source for Solaris or the NT kernel for some nicely, or rather nicer, written code. Whenever I've had to dive into kernel level code in Linux it's always a case of brace yourself, it's going to get ugly. Very ugly.

As for git. I have always found it a very good litmus test for whether the person actually understands how version control / content management software actually works. Git is little more than an old-style delta diff file journaling system. So a 30 / 40 year old technology. Basically a slightly updated version of SCCS or CVS. With absolutely all the problems delta diff systems have. Passable for open source projects but a total mess with commercial project codebases when you try to use it as a proper version control system. So the only strategy is to fall back on the code management procedures that worked before version control software became dependable enough to use for big code base commercial projects in the mid 1990's. I know this approach has saved me a lot of wasted time when forced to work with git. Just like it did with CVS decades ago.

Young 'uns, and their shiny new toys. Which are just tarnished old toys reinvented by the next generation and waved about like they are something new and innovative. Which they aren't.

        1. MJB7

          Re: Git

          The two big problems with CVS were:

          - Automatic merging didn't work well. Merge technologies have improved dramatically since then, and it is no longer really an issue.

          - An individual change was at the file level. There was no multi-file commit. Most systems since then have fixed that.

          I have used SourceSafe, Perforce, and git. SourceSafe was unreliable (and didn't have multi-file commits). Perforce was fine, and easy to use. git is fine, and rather more complex to use. I _think_ I prefer git, but that may just be Stockholm syndrome.

          1. Hope Spirals
            Pint

            Re: Git

            Up for -> Stockholm syndrome

      5. Is It Me

        Re: Nothing new...

The other gem he created is SubSurface, a bit of dive logging software that imports from most dive computers so you aren't reliant on the crud that the manufacturers knock up, and you don't get tied in to one make of computer if you want to keep all your logs together.

        He has mostly handed it over to others but still chips in occasionally.

  4. Yet Another Anonymous coward Silver badge

    This is why the system works

    A, you're a moron this should be faster, blah is always faster

    B, you're the moron, stop talking bullshit, this is a special case where blah isn't faster, I know what I'm doing

    A, erm ok I suppose, but normally blah is faster, we will do it your way...

That is a much better outcome than everyone having to sign up to diversity codes of practice and sing kumbaya around every checkin

    1. Brian Miller

      Re: This is why the system works

      Or I can understand what would happen if Torvalds saw my coworker's code. Side effect from calling a search: log out.

      Seriously. I will have to explain to my non-coding manager why this is a bad thing, and things like this make my project contributions late.

    2. Anonymous Coward
      Anonymous Coward

      Re: This is why the system works

      Yep, people should disagree if they don't understand or actually disagree and then change their mind when it becomes apparent it's right.

Think about it: someone at Microsoft put forward the idea of "Clippy" (and Windows ME/Vista) and no one in that meeting had the nerve to say "What the fuck are you smoking?"

    3. mevets

      Re: This is why the system works

      It is too bad there isn’t another choice between being a fuckwit and singing kumbaya. I don’t understand how mailing lists intrinsically create this dichotomy.

If a third option were possible, it might be to insist that people read threads before following up on them. Certainly this limits the length of email threads, but might actually increase the quality of decisions and code. The bellicose fucktard model actively discourages people from taking part; why would I want to waste my time reiterating the conversation for someone who doesn't bother reading it?

Maybe a few years of President Homer (and Prime Minister Barney?) will help people appreciate the model of passionately arguing from a position of understanding. Could Linux have its Obama moment?

  5. Anonymous Coward
    Anonymous Coward

    yes, well, but...

Look, Mr Chinner is right. This is why concurrent I/O on large data sets is a specialized application - particularly when you *don't* want to actually cache most of it (i.e., readahead is a bad idea). Some of this could be fixed with applications using madvise better, or by having finer-grained tools than that.
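The kind of hint being asked for above already exists in limited form — madvise for mappings and, for plain file descriptors, its cousin posix_fadvise. A Linux-flavoured sketch (the function name is mine) of a one-pass scan that asks the kernel not to keep the pages afterwards:

```python
# Read a file sequentially once, telling the kernel that readahead is
# welcome but that caching the pages after the scan is pointless.
import os

def scan_without_polluting_cache(path, bufsize=1 << 20):
    fd = os.open(path, os.O_RDONLY)
    try:
        # One sequential pass: readahead helps, retention does not.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
        total = 0
        while True:
            chunk = os.read(fd, bufsize)
            if not chunk:
                break
            total += len(chunk)
        # Tell the kernel it can drop these pages from the page cache.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        return total
    finally:
        os.close(fd)
```

These are advisory only — the kernel is free to ignore them — which is arguably the "finer-grained tools" gap the comment is pointing at.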

    Linus, and a large number of Linux developers, have the point of view that "if it speeds up kernel compiles, it's the right idea". That's fine insofar as it goes, but doesn't spread to other workloads all that easily.

    Hardly surprising. Didja expect Linus not to be Linus?

    1. Jason Bloomberg Silver badge

      Re: yes, well, but...

      I think this is Linus's core problem; he won't easily accept that others may actually be right. And, when they persist in being 'totally wrong' as he sees things, he turns shouty-sweary and starts accusing quite capable people of being incompetent and worse, which merely escalates the disagreement.

      We have probably all been on both sides of that at times; not fully grasping what we're being told and not having others understand what we are saying. It happens. It seems Linus still isn't dealing with that very well.

      1. RM Myers

        Re: yes, well, but...

The fact that this episode is even noteworthy makes me believe he has made progress. It is never easy to make core changes to your behavior, and it's not surprising that he sometimes backslides. The real test will be how he behaves over the long term.

        1. Doctor Syntax Silver badge

          Re: yes, well, but...

          "The fact that this episode is even note worthy makes me believe he has made progress."

          You need to bear in mind that these sweary episodes make the news because they are not and were not BAU.

      2. TRT Silver badge

        Re: yes, well, but...

        I have to admit to having a similar set of arguments and swearing recently (not with Linus) regarding load balancing strategies and why Layer 7/Application Load Balancing isn't always the best way to do it for every network architecture. They got very defensive about things.

        I'm sorry to say I caved in, deferring to their expertise on the subject just so I could have a quiet life and get the project moving again. I ended up having to push my arm even further into the boss's pocket to find an extra £5k to buy their box deluxe instead of the medium one. Once they've done the production install, I'll set up a trial on a development side channel and see if I was right after all.

    2. NetBlackOps
      Holmes

      Re: yes, well, but...

      I agree that Chinner is right as my thoughts have already been there for a while. The whole OS architecture needs a re-think in light of modern hardware developments but I still don't see that happening. It's the old war of good enough against the amount of resources required to re-engineer the beast(s) for peak performance, which is basically what underlies what Linus is saying in the list.

      1. anoncow

        Re: yes, well, but...

        Maybe you think Chinner is right because you didn't read what he posted, or read it and didn't understand. I quote: "the page cache is still far, far slower than direct IO". No, wrong. Idiot.

There may be specific cases where that isn't a bald-faced lie, but in general it's just that: a bald-faced lie. Chinner. Just ignore.

        1. Doctor Syntax Silver badge

          Re: yes, well, but...

Perhaps you didn't understand it. If your concern is to get an item of data onto disk ASAP then you don't achieve that by leaving it sitting around in the cache until such time as an algorithm you don't control decides to put it on disk. If you can do direct I/O then it's faster. OTOH caching is going to be best for overall throughput, but does leave cached data at risk for some time.

If you're a good DBA you are paranoid about integrity and consistency of data. You assume there are things out there such as power failures ready to attack*. You don't like the idea that a page from this table might get written and a page from that not, so that a join fails or returns incorrect results. You look for an RDBMS engine that can (a) do journalling so it can roll back incomplete transactions and roll forward those committed but not yet completely updated in the tables and indexes, (b) coordinate its cache flushing with the journal, (c) coordinate all that with, if necessary, backing up the database whilst still continuing transactions and (d) have its parameters tuned to gain the optimum throughput. The parameters for that case are not necessarily going to be those suited for general purpose file system I/O, even a journalled file system; it's a much more complicated case. In such cases the engine needs direct I/O so that the programmer can rely on data having been written when a write call returns.
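One way a program gets the "write() only returns once it's on disk" guarantee described above is to open with O_SYNC — a sketch only (the helper name is mine; real database engines typically combine O_DIRECT with their own buffer management instead):

```python
# Append a record with synchronous-write semantics: on Linux, write() on an
# O_SYNC descriptor does not return until the data (and the metadata needed
# to retrieve it) has reached stable storage.
import os

def durable_append(path, record):
    flags = os.O_WRONLY | os.O_CREAT | os.O_APPEND | os.O_SYNC
    fd = os.open(path, flags, 0o600)
    try:
        os.write(fd, record)   # blocks until the data is stable
    finally:
        os.close(fd)
```

The trade-off is exactly the one being argued about in the thread: every write now pays the full device latency, which is why it only makes sense when durability of that specific write matters more than throughput.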

Even with ordinary file system I/O Linus had a wake-up call back in the days of ext 3, or maybe even ext 2. IIRC memory sizes had gone up, and as the file system simply made use of otherwise unused memory as cache, it transpired that instead of being flushed frequently because of lack of space, caches were sitting around until the kernel synced them on the basis of time since the last flush. Realising the possible consequences of this - possibly some data actually got lost - he commented "what moron put that in there?" or something similar.

          I assumed that it was self-deprecating, the lesson having been learned. Ext 4, of course, being journalled, made things much more secure. But maybe not.

          * Of course you don't expect this to be an everyday occurrence. Indeed, you hope you never have to deal with it, but then any system administrator taking routine backups hopes never to have to use them to deal with anything other than a careless file deletion by a user.

          1. anoncow

            Re: yes, well, but...

            It's clear that you have only the most tenuous grasp of file system basics. Minimizing write latency is not the primary goal of a filesystem, rather, the goal is to minimize operation latency. That is why nearly all write IO on Linux is buffered, not fsync. Simple fact, which apparently escaped you.

            Now if you want to talk about synchronous IO, that is interesting. Just don't labor under the delusion that this is the most important aspect of file system design. And for the record, XFS and by extension Chinner, suck at sync IO.

        2. MonkeyCee

          Re: yes, well, but...

          But Chinner was talking about a particular use case which is, as I understand it, one of those specific cases.

          Rather than replying about this specific case, Linus (and yourself) took a quote about a specific case, clipped it, and replied like it was the general case.

I appreciate that at least in your case, you acknowledged there are cases where it wouldn't apply. And I'm sure if you'd realised which use case it was about, you might not have gone straight for the ad hom.

          1. anoncow

            Re: yes, well, but...

            The reason that both Linus and I home in on that quote and heaped abuse upon the idiot who posted it is, it's an idiotic thing to say. Along the lines of "water is lighter than air". Yah, you can bore everyone to tears trotting out some case where water really is lighter than air, but oh please. Save that for when you really feel the need to be ignored at a party. Same goes for Chinner's unadulterated idiocy, followed up by wasting an enormous amount of bandwidth attempting to buttress that indefensible position.

            1. Maty

              Re: yes, well, but...

              "Yah, you can bore everyone to tears trotting out some case where water really is lighter than air, but oh please."

              Clouds?

              1. TRT Silver badge

                Re: yes, well, but...

                One specific instance: The Cloud.

            2. sabroni Silver badge

              Re: yes, well, but...

              Ooh, blind devotion. Stupid, but a little bit sweet too....

        3. R3sistance

          Re: yes, well, but...

          The idiot here is you, as the article has ALREADY explained, what Chinner said was in a context. What Linus did is called a Quote Mine, and you've REPEATED that quote mine. This Quote Mine was then used by Linus to construct a Strawman which Linus then attacked.

          All you've done is repeated a quote mine and shown yourself a real idiot here.

          "but in general it's just that: a bald faced like. Chinner. Just ignore."

The bald-faced lie is saying that this was a general statement; it was made IN A SPECIFIC CONTEXT.

    3. Anonymous Coward
      Anonymous Coward

      Re: yes, well, but...

      Having read through some of the comments, I believe the positions are:

      - Linus believes the page cache is the correct approach as it stands because legacy.

      - Chinner rightly argues that the page cache significantly impacts performance and has a number of legacy issues. As a result, faster file system operations are moving to DMA to avoid the page cache entirely. Which would appear to be the answer.

      Given the impact of page cache changes (in part due to the things that are currently broken with the current implementation), I suspect Linus's position is correct and sums up the issue with many of his previous rants where changes that break legacy functionality must be avoided at all costs.

      The DMA solution or equivalent "new" functionality that provides the required performance with concurrency while avoiding the legacy issues with the page cache would appear to be the solution in the short-to-medium term to address the niche requirement.

      Assuming Chinner is correct about the long-term trend away from I/O based on the page cache (as this is not a new issue in OS I/O design, I would suggest that this isn't a given as hardware and CPU changes may provide the performance via other means), the solution will be mature when it is required for mainstream release.

      1. fajensen

        Re: yes, well, but...

        .... changes that break legacy functionality must be avoided at all costs.

Shhhh! Poettering and Team Systemd will hear this and implement whatever it is they alone think that Chinner wants - then the Gnome crowd will make Everything depend on it!

  6. Greg 38

    Time for a pint

    Hey, I just saw an article about Linus going all blinking Linus on someone and I grabbed a bag of crisps and a pint to read it. I guess I'm a sucker for this sort of entertainment.

  7. swm

When Smalltalk-76 came out on the ALTO computers it cached objects in its very limited memory and cleverly purged them in disk file order (to a Diablo 31 disk). I wasn't aware of how much time this saved until I did "surgery" to change the microcode, which purged everything in the cache. It took minutes for the system to resume normal operation.

    1. timrowledge

      Ah, those were the days...

      Now even a watch has more cpu and memory than (probably) all the Altos that ever existed.

      And a simple webpage uses more memory than that.

  8. Anonymous Coward
    Anonymous Coward

    cutting edge caching schemes

    I am totally qualified to pontificate about disk caching in the pub.

On one hand SMARTDRV.EXE could give DOS and Windows 3.11 quite a speedup for some operations, if you had spent all your wages on 16 megabytes of RAM.

    On the other hand, it's quite an annoyance when I want to unplug a USB2 drive but there are 30GB of writes cached in RAM that are going to take 15 minutes to write to the disk.

    1. Anonymous Coward
      Thumb Up

      Re: cutting edge caching schemes

I've found Linux quicker at transferring files to external drives (USB/SSD/HDD), but I suspect that's the USB 3.0 drivers in Windows 10... as 7 works fine. :P

But generally, with a USB2 drive it's the drive's write speed that's the limit, not slowness of the cache, right?

    2. Sorry, you cannot reuse an old handle.

      Re: cutting edge caching schemes

      That's why it took Microsoft 30 years to finally change Windows' default policy for USB connected drives from "Best performance" (ie write-caching) to "Quick removal" (ie not-write-caching), after zillions of users lost their data because of their inability to understand a simple instruction to "click on icon to disconnect device before pulling the plug"...

      https://support.microsoft.com/en-gb/help/4495263/windows-10-1809-change-in-default-removal-policy-for-external-media

      1. david 12 Silver badge

        Re: cutting edge caching schemes

        'Bulls%^t! Complete bull$h*t!'

        Even in WinXP, the default was that external disks were not cached, as had been the case since the days of floppies.

        When MS says "previous versions of Windows defaulted to Quick Removal", what they mean is "previous versions of Win10." Maybe even Win8.

        Actually, the bigger point is that MS never admits to anything good about previous versions. You will always see statements like "Version 10 of Microsoft Windows includes support for a fully graphical 'windows, icons, menus, pointer' User Interface." That does not mean that Win8 lacked that feature. It's just something to be aware of when reading MS feature lists.

        1. Jakester

          Re: cutting edge caching schemes

And in the days of DOS/Win 3.1, if you had the disk caching software running, you had to flush the cache before shutting down the computer. I had experienced many corrupted data files before Microsoft eventually made it known that you had to flush the cache, and gave the instructions on how to do it. They eventually made it automatic in the shutdown process.

      2. Sorry, you cannot reuse an old handle.

        Re: cutting edge caching schemes

        Not entirely correct: write-caching was enabled by default on all USB devices in Windows 2000 and on USB storage hubs (like card readers) in Windows XP and following. I didn't go as far back as checking Windows 3.1, NT or even DOS but I'm sure someone will :-)

      3. This post has been deleted by its author

    3. Simon Harris

      Re: cutting edge caching schemes

      "On the other hand, it's quite an annoyance when I want to unplug a USB2 drive but there are 30GB of writes cached in RAM that are going to take 15 minutes to write to the disk."

      Although if you can afford enough RAM to give away 30GB to file caches, you can probably afford a USB3 or eSATA port and a fast external drive - you'll still have a few minutes to wait though ;-)
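      On Linux, the amount of dirty data sitting in RAM waiting to hit the disk is visible in /proc/meminfo (the Dirty and Writeback fields). A minimal parsing sketch; the field names are real, but the sample text below is made up for illustration:

      ```python
      def dirty_bytes(meminfo_text: str) -> int:
          """Return bytes of page cache still waiting to be written back
          (Dirty + Writeback), parsed from /proc/meminfo-style text."""
          fields = {}
          for line in meminfo_text.splitlines():
              name, _, rest = line.partition(":")
              parts = rest.split()
              if parts and parts[0].isdigit():
                  fields[name] = int(parts[0]) * 1024  # values are reported in kB
          return fields.get("Dirty", 0) + fields.get("Writeback", 0)

      # made-up sample; on a real system read open("/proc/meminfo").read()
      sample = "Dirty:  204800 kB\nWriteback:  1024 kB\n"
      print(dirty_bytes(sample))  # 210763776
      ```

      Watching that number drain to near zero is, in effect, what "safely remove hardware" waits for.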

  9. Hstubbe

    There's far more code in the Linux kernel *not* written by Linus. I don't get these Linus fanboy types who keep defending his psychotic behaviour that drives away other, more brilliant contributors.

    1. Flocke Kroes Silver badge

      It is not that hard to understand

      Dave Chinner wasn't driven away. Brilliant contributors can explain why they are right and Linus is wrong. Mediocre contributors have a strong incentive to read, understand, think and test before contributing. Time wasters get a thorough scolding and hopefully do not waste other people's time again until they get a clue.

      Have you ever been on the receiving end of a frothing-at-the-mouth swearing rant screamed so loud that they can hear it in the car park and enjoyed every second of it because you know what is going to happen when you calmly explain why the ranter has entirely misunderstood what is happening?

      1. Anonymous Coward
        Anonymous Coward

        Re: It is not that hard to understand

        Oh, yes, and it's very, very enjoyable.

        Almost as good as a cool pint on a sunny day after a good day's worth of work.

      2. Anonymous Coward
        Anonymous Coward

        Re: It is not that hard to understand

        > Brilliant contributors can explain why they are right and Linus is wrong

        Or they can find a less stressful project to work on.

        Then all you’re left with are the mediocre ones.

      3. mevets

        Re: It is not that hard to understand

        "Have you ever been..." ? Yes, and playing tit-for-tat only extends the game; I have also left good jobs with (supposedly) desirable companies when I witnessed this sort of assholery being poured on others.

        I don't know if I am good enough to contribute to linux, but I have been a significant contributor to an OS that runs things like nukes and trains and radiation equipment among other things. I've never attempted to contribute to Linux because that cadre makes the gcc maintainers look reasonable. I know others like me.

        Linux isn't Soylent Green; there are lots of better OSes. It does have the advantage of being "free and good enough" for many applications.

        1. Intractable Potsherd

          Re: It is not that hard to understand @mevets

          "there are lots of better OSes." What are these OSes, and are they generalist or specialist? Can anyone use them or are they for geeks only? Do they work on any hardware, or do they have restrictions? I'm asking because I'll quite happily move to an OS that is better than Linux, because Linux is great, in my opinion.

          [I'm not trolling, I am genuinely interested]

          1. dew3

            Re: It is not that hard to understand @mevets

            I cannot say I have worked much on OS code. But I have 30 years in databases, and I have worked on many OSes, and directly with many OS kernel developers.

            That said, I would not agree there are clearly lots of better OSes. I guess I could probably think of some better OSes if I wanted to, but you hit a great nail on the head - better at what? More to the point though, I will say most OSes I have worked on have some interesting individual features obviously better than Linux even if the whole OS isn't obviously better (*). But for reasons I cannot fathom, OS development seems to attract a lot of pretty extreme "not invented here" types, so useful features don't always migrate around like in other areas of software.

            (*) except maybe for HP-UX, which never seemed more than an unimpressive vanilla variant of system-V unix. Of course DEC's original ULTRIX was literally just BSD Unix with a handful of text lines changed, but they flushed that OS long ago, replacing it with the far better OSF/1, which in turn was killed off by HP...

      4. MrReynolds2U

        Re: It is not that hard to understand

        Except when dealing with Manglement who have only a small understanding of systems. In those cases, they always think they are right.

    2. davcefai
      Thumb Up

      Linus is the man who keeps the project going. A brilliant coder but also a brilliant cat-herder. What other OS progressed as fast and as well as Linux?

      So he gets pissed off at some people. So what? His system(s) work and the OS keeps going strong.

      Linus doesn't need to write code. He does need to manage it. And he does, in a way most managers can envy.

      Mr Torvalds, in case you get to read this, consider your hand shaken.

      1. Warm Braw

        Linus is the man who keeps the project going

        I think that's more of a concern - having what is in effect a single point of failure in an otherwise large and distributed project. Having people who understand the big picture implications of sub-system changes is clearly important, but it would be good to be confident that there were enough of them to cope with unexpected bus-related fatalities. That doesn't just mean having the technical expertise, but getting to practise the cat-herding.

        1. Chronos
          Coat

          Having people who understand the big picture implications of sub-system changes is clearly important, but it would be good to be confident that there were enough of them to cope with unexpected bus-related fatalities.

          Get rid of the buses. Problem solved...

          1. anonymous boring coward Silver badge

            And then comes along a Google Waymo..

            1. Chronos

              One problem at a time, my friend. Besides, it's highly unlikely that Linus and Greg K-H will be riding bicycles simultaneously across an intersection while the "supervising" driver is fiddling with their phone. That's a specific set of circumstances that would be akin to trying to engineer, say, everyone ditching legacy hardware within the next decade rendering page caches obsolete...

      2. anoncow

        Nice thing about Linus, he understands caching. As opposed to Chinner, who is clearly challenged.

      3. oiseau
        Thumb Up

        Mr Torvalds, in case you get to read this, consider your hand shaken.

        Indeed ...

        +100

        O.

      4. sabroni Silver badge
        Coat

        re: Mr Torvalds, in case you get to read this, consider your hand shaken.

        arse licked more like!

    3. TheMeerkat

      And why do you think the kernel still works after all these developers added their code? If not for Linus being rude and dictatorial, the kernel would be dead by now due to all the inconsistent features added.

      1. sabroni Silver badge

        If not for Linus being rude and dictatorial, the kernel would be dead by now

        So it's nothing to do with his coding ability? He's literally just a very shouty manager?

        1. hammarbtyp

          Re: If not for Linus being rude and dictatorial, the kernel would be dead by now

          the only reason he can shout and get away with it is because he is respected by his fellow coders. He has done the hard yards. Anyone with fewer qualifications would be ignored.

          More importantly (and often rare in mere "coders"), he has a clear vision of where Linux should go. On a project this critical that matters: otherwise everyone throws in their ideas and the code becomes a conflicting mass of dingo poo. I would rather have a shouty manager who knows what he is talking about and is clear on the product priorities than a manager who believes that democracy and everyone getting on is more important than product quality.

          To be honest, these rants are blown out of all proportion. There have been times when I have had shouting matches with my colleagues to the point that someone had to intervene and separate us. Normally we cool down, re-evaluate everyone's position, come to some sort of compromise or solution and just get on with life. The difference is that I am not as high profile as Linus and I do not do it on a public forum. Not only that, but it can be easy to misconstrue a comment or email, especially when cutting across cultural barriers.

          1. sabroni Silver badge
            Facepalm

            Re: There have been time when I have shouting matches with my colleagues....

            ....to the point someone has to intervene and separate us

            Oh, well you're definitely someone I'll be taking work behaviour advice from! I've never got into a shouting match with my colleagues.

            What am I doing wrong?

  10. arctic_haze
    Linux

    Good!

    So many things went wrong in the world recently. I'm glad that at least Linus did not change!

  11. Anonymous Coward
    Anonymous Coward

    This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

    I'm currently the founder of a startup called GlugGlugGLUG! and we're writing an app to allow people to track when the coffee at their favourite Artisan Brewhaus is at its best and share this via social. This means that others can visit the same house of bean at the right times for their perfect brew!

    We have a development team in Hungary, but rather than being pure developers, we actually hired engagement and diversity champions from diverse ethnic and self-structural backgrounds to sit next to, and often between the developers so they can mediate the idea meta-flow. An example: one of our developers was working on what they called the 'force quit' function. His engagement champion said 'whoa! imagine that component was a woman - would you want to force her? Not cool. She has rights'. It took a while but the developer decided not to implement the function and indeed felt his do-profile would be a better fit elsewhere.

    Although this has held up our product by six months and we are in danger of being beaten to market by our rivals Frothaaaaaaah!, it's still the right thing to do. I sleep soundly. I sleep in bliss.

    Maybe Linus should adopt our model as well. I feel it could really structure his engagement patterns in a life-positive manner. He can reach out to me at founder@glugglugglugglug.co (because who doesn't love an extra glug of their favourite brew!)

    1. GrumpenKraut
      Devil

      Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

      > ...and share this via social.

      For this half sentence alone: Aaaarrrrrrrrgh!!!

      You ---->

    2. Anonymous Coward
      Anonymous Coward

      Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

      And that post is at the bottom of today's problems with polarisation, because the alternative to Linus not swearing at someone like that is not a load of new age touchy feely, it's him learning to act like a grown-up, or even Wolfgang Pauli, and master the firm but polite disagreement which is actually a more devastating putdown.

      I remember being told by someone that in wars even generals swear at one another, but then people are getting killed. In peacetime, they craft subtle minutes that undermine insidiously. Sense of proportion matters.

    3. Irongut
      Mushroom

      Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

      "track when the coffee at their favourite Artisan Brewhaus..."

      Hipster fucking douchebag.

      1. Doctor Syntax Silver badge

        Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

        Whoosh.

    4. heyrick Silver badge

      Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

      "the same house of bean at the right times for their perfect brew!"

      The ONLY hot beverage brew is tea. This namby pamby coffee rubbish with its endless faux Italian names served by "baristas" can bugger off for all eternity.

      1. Anonymous Coward
        Anonymous Coward

        Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

        Agreed, I want to be able to go into a coffee shop and buy a proper café au lait.

      2. MonkeyCee

        Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

        "This namby pamby coffee rubbish with its endless faux Italian names"

        Ah, there's your problem.

        As any fule kno, the only proper coffee is made by Arabs and Greeks*. Three cups and you're so knurd that you're either coding directly in hex or creating your own religion. The only other group of people who come close are chemists**.

        Why every Kurdish or Syrian housewife can make a better cup of coffee than every so called barista in the Netherlands is another matter.

        * I'm dating a Colombian; I'm quite aware of just how many countries I'm insulting there :) but Italians aren't any better at making coffee than the French or Germans. Certainly not better than any coffee-producing country.

        ** the only thing I recognised on Breaking Bad was the coffee making setup.

    5. Doctor Syntax Silver badge

      Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

      Nice one, A/C but maybe a little too subtle for some.

      1. Teiwaz

        Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

        Nice one, A/C but maybe a little too subtle for some.

        Any other site, the imaginary 'Release Armageddon' button would be getting a pounding...

        But even on the Reg, where I was 99.99% sure it was someones twisted idea of humour, my testicles were still shrivelling and my shard of the collective soul of humanity was withering unto death.

        And still the button got pounded.

        1. heyrick Silver badge

          Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

          "And still the button got pounded."

          Big red buttons just invite somebody to prod it, "to see if it works"...

        2. GrumpenKraut
          Meh

          Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

          > And still the button got pounded.

          Yes, sadly. I really like the comment I replied to (and up-voted it, just for the record).

    6. coderguy
      Pint

      Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

      "

      It took a while but the developer decided not to implement the function and indeed felt his do-profile would be a better fit elsewhere.

      "

      For that you get one of these ---->

    7. Anonymous Coward
      Facepalm

      Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

      Ha haaa, at least ten people on this forum don't have a sense of humor!

      1. Anonymous Coward
        Anonymous Coward

        Re: This is why Diversity matters. Linus is wrong, but he can learn from my story. Here's how.

        Even funnier - I’ve just grabbed the domain name from them! Best £750 I’ve ever spent!

  12. chivo243 Silver badge
    Happy

    Glad I work on the demand side of IT

    the swearing and ranting echoing off the walls... I would enjoy the banter, but I doubt I could get any real work done.

  13. Doctor Syntax Silver badge

    Linus has always had some sort of opposition to direct IO. One symptom seems to be the absence of character devices for disk partitions - something which has been part of Unix for as long as I can remember (V7 days). There are ways round it but it's a strange fixation. I'm not sure why he failed to grasp that applications such as database engines rely on their own journalling schemes for their integrity and that when a journal write completes the programmer should be able to assume that the data is actually in storage and not lounging about in cache and vulnerable to power or other failure.
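    The guarantee Doctor Syntax describes - a journal write returning only once the data is on stable storage, not lounging in the page cache - is what fsync()/fdatasync() are for. A minimal sketch of a journal append with that property; the file name and record format here are invented for illustration:

    ```python
    import os
    import tempfile

    def append_journal_record(path: str, record: bytes) -> None:
        """Append a record and return only once the kernel reports
        it (and the file metadata) flushed to the device."""
        # O_APPEND keeps concurrent writers from interleaving mid-record
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
        try:
            os.write(fd, record)
            os.fsync(fd)  # block until the write leaves the OS cache
        finally:
            os.close(fd)

    # usage: once this returns, a power cut must not lose the record
    with tempfile.TemporaryDirectory() as d:
        journal = os.path.join(d, "journal.log")
        append_journal_record(journal, b"commit 42\n")
        print(os.path.getsize(journal))  # 10
    ```

    Database engines typically go further still (O_DIRECT, preallocated files, checksummed records), but fsync after the journal write is the durability floor.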

    1. Anonymous Coward
      Anonymous Coward

      Rumour has it that Larry Ellison allegedly had 'direct I/O' with one of Linus's many girlfriends. Although come to think of it, Larry was allegedly known for regularly 'flushing his cache' during these sessions too.

      Sorry.

      1. Snorlax Silver badge
        Trollface

        @AC

        "...one of Linus's many girlfriends"

        lol, you wish. Comedy gold...

  14. Matthew Taylor

    Here's to swearing

    When the West falls, I hope the niceness police will be held up as primary architects of its demise.

    1. Anonymous Coward
      Anonymous Coward

      Re: Here's to swearing

      And Japan? Politeness caused its demise? Toyotas are worse than GM cars? Canon and Nikon are failures while Kodak was a success?

      You do surprise me.

      1. Teiwaz

        Re: Here's to swearing

        And Japan?

        Don't confuse 'nice' with 'polite'... or respect.

      2. mithrenithil

        Re: Here's to swearing

        !!!

        Shouting isn't required because in many cases the subordinate is "socially" required to meet their boss's expectations. That includes the death march (crunch in western game dev terminology), or going out to the wee hours of the night socialising.

      3. fajensen
        Headmaster

        Re: Here's to swearing

        The Japanese, like the English, are mainly seen as "polite" because foreigners do not understand the language, and especially the grammar, well enough to properly comprehend the insult!

        In Japan older people are allowed by social norms to hurl abuse at any person younger than them. And they do. Being old in Japan means that you don't have to give a shit anymore and the granny and gramps, they love it!

        In the work context, everyone has to suck up to their boss and only criticise very mildly and in his (because it is a He) general direction. However, part of work in Japan is also Drinking With the Boss.

        While one is 'drunk with colleagues and boss', the social rules change. Any invective and graphic description of the boss's many faults, deficiencies, adventures with seafood and farm animals as well as general failures become socially acceptable, and it is unacceptable for the boss to retaliate later. He basically has to sit there and say: 'Thank you for your frank and honest observations. I will try hard to do better', and he will pay for the drinks too! The social norm is that nobody remembers anything the next day, claiming to be 'too drunk'.

        So, what is it about Japan again?

  15. jmecher

    If you haven't had "sparks flying" moments while working on things, it's probable you haven't been doing useful things all along.

    Most of the time, after tempers cool, people realize it's nothing personal, and competing ideas are part of the game. Egos play a part here, and recognizing when you were wrong is a good trait to have - both for self-preservation and for still having developers left at the end of the day.

    Unlike in a commercial setting, where if the boss pulls rank you have basically no recourse, developers will walk away eventually, and that, I think, is ultimately what keeps Linus in check.

    1. Irongut

      You always have the recourse of moving to another job, preferably leaving said boss up shit creek without you. It's a bit harder to find a different open source OS to work on.

      1. Fatman

        RE: the recourse of moving to another job

        <quote>... preferably leaving said boss up shit creek without you.</quote>

        Done that, more than once.

        The last time I did that, the look on the boss's face was:

        PRICELESS!!!! He screamed as I walked out: "You will be back."

        I had to go back about a month later, with a sheriff's deputy in tow.

        I had in my hand a court order for them to hand over my final paycheck, or the company's CEO was going to be arrested. I was tired of their 'obstructionist tactics'.

        Hey ElReg, where is that middle finger icon

        1. A.P. Veening Silver badge

          Re: RE: the recourse of moving to another job

          I had in my hand a court order for them to hand over my final paycheck, or the company's CEO was going to be arrested. I was tired of their 'obstructionist tactics'.

          I hope the amount of that final paycheck included punitive interest for their 'obstructionist tactics'.

          Here in the Netherlands pay is usually done by direct transfer, but if it is more than four working days late, punitive interest of 1% per working day can apply cumulatively until 50% is reached (after which it drops back to the legal interest rate, which is a couple of % per year). As a result, pay is nearly never late and if it is, the company is usually out of business within days.
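          The rule of thumb described above (1% per working day beyond the first few, cumulative, capped at 50% of the pay owed) works out as below. A sketch of the arithmetic as the comment states it, not legal advice; the four-day grace period and the simplification of ignoring the post-cap legal rate are assumptions:

          ```python
          def late_pay_penalty(gross_pay: float, working_days_late: int) -> float:
              """Penalty on late pay: 1% per working day beyond the first
              four, cumulative, capped at 50% of the amount owed."""
              chargeable_days = max(0, working_days_late - 4)
              rate = min(0.01 * chargeable_days, 0.50)  # cap at 50%
              return gross_pay * rate

          print(late_pay_penalty(3000.0, 10))  # 6 chargeable days -> 6% -> 180.0
          print(late_pay_penalty(3000.0, 80))  # capped at 50% -> 1500.0
          ```

          The cap is what does the real work: it stops the penalty growing without bound while still making even a few weeks' delay painfully expensive.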

          1. eionmac

            Re: RE: the recourse of moving to another job

            Thanks for this comment. I did not know the situation in NL. I think this is needed in the UK, and it should also apply to payments into company pension funds, which are often 'delayed' when companies are in a tight financial situation. Now to lobby my MP (Member of UK Parliament).

      2. Anonymous Coward
        Anonymous Coward

        There's always Windows!

      3. jmecher

        >You always have the recourse of moving to another job

        If you do that every time you have an argument, you were either always right (yeah), or you are part of the problem.

        In my experience, most of the time friction comes from chasing different ideas (or different means of chasing ideas); not to say personal vendettas and such aren't a thing, but the case the article is describing likely isn't one.

        Also, Linus can't kick anybody out of Linux development; they can keep doing what they do and getting paid while ignoring him. I wonder if this is the reason Linus gets mad sometimes...

  16. fredesmite
    Mushroom

    Sounds familiar

    " Nobody can talk about <blah> without you screaming and tossing all your toys out of the crib. "

  17. Snorlax Silver badge
    WTF?

    Why?

    Why do we keep giving attention to this socially inept man-child? I’m surprised nobody’s punched him in the mouth yet.

    I guess some people find him inspiring, as they would a cult-leader like Jim Jones or David Koresh.

    1. hammarbtyp

      Re: Why?

      Same reason why there is still a cult of St Jobs (who also wasn't a particularly nice man). Because they were incredibly important in the creation of the modern world.

      In truth nice people rarely have the drive and push to overcome the hurdles to really create a new industry.

      You don't have to like the guy to respect what he achieved.

  18. This post has been deleted by its author

  19. Potemkine! Silver badge

    Grumpy Ol' Man

    LT forgot to take his pills once again...

  20. Sherrie Ludwig
    Headmaster

    copy editor needed

    I was a copy editor, and still find it jarring to mentally "trip" over a mistake in professional writing. "Clam down" was one such moment, and these are increasingly common. Does no one other than the original writer read the articles before posting? Yeah. small gripe amid the world's troubles and all.

    1. Anonymous Coward
      Facepalm

      Re: copy editor needed

      Please check the use of capital letters in:

      1. Your comment title.

      2. In the sentence "small gripe amid..."

      Thank-you.

  21. drankinatty

    Idiosyncratic Favorite Words -- A Leopard Can't Change its Spots

    Just ask Linus how he feels about C++... https://lwn.net/Articles/249460/

    1. fajensen
      Pint

      Re: Idiosyncratic Favorite Words -- A Leopard Can't Change its Spots

      C++ has the same fundamental problems as Java.

      It is a Big Systems language created for (and by) Big Bureaucracies, so Big Bureaucracy and How to Make It Even Bigger Forever permeate every aspect of its design and use, right down to the need to use Big Money Tools like 'Rational Realtime' smeared over with 'Rational ClearCase' to create and manage the code.

      Java put some extra spin on the whole bureaucracy thing by creating a huge library codebase for life-cycle management of binary modules and all manner of deployment - working the way the experts believed it should be done in 1980. In real life it is so shitty and complex and brain-eroding to use that everyone avoids it and throws everything in a container.

      If someone cleansed all of that guff, Java could maybe be redeemed.
