Linux kernel maintainers tear Paragon a new one after firm submits read-write NTFS driver in 27,000 lines of code

Paragon Software is trying to get its NTFS driver into the Linux kernel, but has submitted it as a single dump of 27,000 lines of code, sparking complaints that it is too large to review. NTFS is the default file system for Windows XP and later. Microsoft is beginning to replace it with ReFS for some scenarios, but NTFS …

  1. KorndogDev
    FAIL

    Why not in 0.5 file?

    Just scan the first 500 lines and count all the gotos.

    1. Jon 37

      Re: Why not in 0.5 file?

      Certain uses of goto are a normal and accepted coding practice in the Linux kernel. They use return values and "goto cleanup" to get something like C++ exceptions.
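      A minimal sketch of that "goto cleanup" idiom (hypothetical function and buffer sizes, not actual kernel code):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the kernel's "goto cleanup" idiom: each failure jumps to a
 * label that unwinds exactly the resources acquired so far, in reverse
 * order, with a single return path at the bottom. */
static int make_pair(char **a, char **b)
{
    int err = -1;

    *a = malloc(16);
    if (!*a)
        goto out;            /* nothing to unwind yet */

    *b = malloc(16);
    if (!*b)
        goto free_a;         /* undo the first allocation only */

    strcpy(*a, "hello");
    strcpy(*b, "world");
    return 0;                /* success: caller owns both buffers */

free_a:
    free(*a);
    *a = NULL;
out:
    return err;
}
```

      The cleanup labels stack up in reverse acquisition order, which is what makes the error paths easy to review.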

      1. bombastic bob Silver badge
        Devil

        Re: Why not in 0.5 file?

        It also tends to generate more efficient code, if you don't gerrymander things just to avoid using a 'goto'.

        We're talking KERNEL code, here. It's not the same as some GUI "app".

      2. Kevin McMurtrie Silver badge

        Re: Why not in 0.5 file?

        Better yet, use C++ if you think you need a lot of C++ features in C. There's nothing like trying to work on C code that spends 40% of its time pretending to be C++.

        1. Anonymous Coward
          Anonymous Coward

          Re: Why not in 0.5 file?

          The entire Linux kernel should be rewritten as a set of C++ templates. That'd be cool.

        2. Anonymous Coward
          Anonymous Coward

          Re: Why not in 0.5 file?

          "There's nothing like trying to work on C code that spends 40% of its time pretending to be C++."

          Ha, how about C++ code that pretends to be C++ code? Pointers to member functions seemingly weren't C++ enough, so (this->*that[val])(foo) becomes something like: [this](auto val) { return (this->*that[val])(foo); }.

          As for the article: 27k lines isn't that much, but if you present a 27k blob to be maintained... well, I see it both ways and am on the fence.

      3. arctic_haze

        Re: Why not in 0.5 file?

        Maybe using goto is normal in the kernel but it does not change the fact that it makes reviewing the code an extremely unpleasant task.

  2. Androgynous Cupboard Silver badge

    Bit harsh

    While I get the problem, donating a read-write NTFS module to Linux is a good thing, isn't it? Paragon devs are not kernel devs, but that's hardly their fault. I suspect 20 years ago, someone would have said thanks, jumped on it and cleaned it up for inclusion.

    1. moonpunk

      Re: Bit harsh

      Couldn't agree more - like you say there would have been a time that the community would have been grateful and would have jumped on this to get it included.

      Happy days!!

      1. Anonymous Coward
        Anonymous Coward

        @moonpunk - Re: Bit harsh

        If the community had been grateful and jumped at this kind of code back in those days, the Linux kernel would be nowhere today.

    2. anothercynic Silver badge

      Re: Bit harsh

      Have to agree here... Although it may have been useful for Paragon to have engaged with whoever has contributed filesystem drivers to the kernel before and asked "how do we do this best?"

      That's how I got started contributing to open source efforts... Rather start off on a good note as a newbie than having your Nomex pants scorched.

      1. danno44

        Re: Bit harsh

        So Paragon should have their 'newbie' submission accepted because otherwise their feelings might get hurt, and they might not ever share their toys again? Boo-hoo!!!

        I guess when it comes to quality submissions and feelings, feelings win. Let the coddling begin!

        1. sabroni Silver badge

          Re: feelings win.

          The feeling being "I can't be bothered to help merge this, I enjoy moaning about MS stuff too much"?

        2. Anonymous Coward
          Anonymous Coward

          Re: Bit harsh

          There are ways of phrasing things. Instead of just going, "that's too long, lol wtf are you doing you noob", they could have said, "thanks for submitting that - it looks like something that'd be really useful. Unfortunately, it's a bit long and doesn't conform to our coding standards, which can be found here [link]. Can we have a chat to see how we can help you get this into a form that can be incorporated into the kernel?"

          1. amanfromMars 1 Silver badge

            Re: How to avoid nearly all of the harsh bits.

            There are ways of phrasing things. Instead of just going, "that's too long, lol wtf are you doing you noob", they could have said, "thanks for submitting that - it looks like something that'd be really useful. Unfortunately, it's a bit long and doesn't conform to our coding standards, which can be found here [link]. Can we have a chat to see how we can help you get this into a form that can be incorporated into the kernel?" .... Anonymous Coward

            Quite so, AC, and that is certainly another very good way to proceed whenever into the particular highlighting of peculiar progress and entangling success. Who/What wouldn't find it compellingly commendable and thoroughly recommendable too?

            1. Cynic_999

              Re: How to avoid nearly all of the harsh bits.

              Pleasethankmeformyeffortinwritingthispost. Iwillletyoucleanitupandresubmit.

              1. Borg.King

                Re: How to avoid nearly all of the harsh bits.

                // Pleasethankmeformyeffortinwritingthispost. Iwillletyoucleanitupandresubmit.

                // Thanks.

          2. andro

            Re: Bit harsh

            They did, see this response:

            https://lore.kernel.org/linux-fsdevel/20200815190642.GZ2026@twin.jikos.cz/

            And they have all been working together nicely, the patch and process is much improved, and things are looking good.

    3. Charlie Clark Silver badge

      Re: Bit harsh

      There are quite a few things to consider when you take on the maintainership of code, which is basically what is happening, and before it gets to a code review you have to be sure that the legal aspects are covered: one of the reasons there are so few NTFS drivers is that Microsoft has kept the file system essentially private.

      Then there is the code review. File systems are not new and there might, by now, be a template or at least accepted best practice for their drivers. It's happened before that code dumps were received with "thanks, but no thanks" because there was more work involved in understanding the code than writing it from scratch.

      1. John Brown (no body) Silver badge

        Re: Bit harsh

        "because there was more work involved in understanding the code than writing it from scratch."

        Whilst I mainly agree with you, in this case no one is going to write it from scratch and the existing NTFS driver has issues. Depending on the licensing terms of this code dump, maybe the devs/maintainers of the current NTFS driver can just use it to see how it works and "fix" the current implementation, although I suspect not.

    4. Lee D Silver badge

      Re: Bit harsh

      Not really.

      Let me give you an analogy.

      I've got an old banger of a car. It still works. So I leave it in your garden one day with a note saying it's a gift to you.

      The maintenance, updating, API-conversion and upheaval that happens in the Linux codebase is huge. So a dump-and-run is really an obligation, not unlike giving someone a pet dog for Christmas when you don't live with them.

      That maintenance burden is FAR FAR more difficult than the initial code-drop is. You have to understand all the code, change it often, make sure it stays secure and bug-free, deal with support from users who now have NTFS 5.1 and why are they getting corruption now when they didn't on 5.0, and so on.

      Literally, maintaining that code is harder than writing it in the first place. A dump-and-run is a common output from a company that doesn't WANT to maintain it any more, or deal with the user's complaining that it's unmaintained, or has a bug they haven't fixed.

      There were several NTFS projects, historically. Hell, even one that emulated enough of the programming interface to use the original Windows DLLs to access the filesystem under Linux. They've all suffered the same fate - they might "work" but nobody maintains them, so they get bugs and get obsolete and don't work for modern versions of the filesystem, so they end up being a dead-weight in the kernel.

      This isn't unique to NTFS, or even a particular subsystem of Linux - maintenance burden is the determining factor in a lot of the patch acceptance pipeline. Everyone involved has to understand it. It has to use the common code that it can (so NTFS doesn't have its own special way of allocating a file or whatever that's different to everyone else). It has to work internally in a similar manner. Quirks have to be ironed out so they are clearly documented. Filesystem detection has to be consistent so it only offers to run disks that it knows it has the support for, and tied in. And so the filesystem maintainers have to be able to review it and not accept any NTFS-only nonsense.

      Because Paragon - if history is anything to go by - will not be patching this code in years to come. They'll be dead, gone, forgotten, unsupportive, the one guy who opened the code will move to another company or whatever. That's how *so many* filesystems, drivers, subsystems, etc. die in Linux. Neglect, and nobody able to understand it to take it over, because it's an "oddball".

      All the wifi drivers use the central 802.11 framework, which was arrived at after dozens of independent and differing implementations that each vendor tried to put in just to run *their* card, and no other. The commonalities were pulled out, modularised, and then people were expected to conform or explain why they couldn't possibly do that. After decades, all the wifi drivers now use pretty much the same infrastructure even if they are radically different in capabilities, and those manufacturers are long dead and gone.

      Code-dumps are the problem here. Dump-and-run is seen as a charitable action, but it's just an obligation on Linux kernel maintainers to justify, fix, debug, handle user support, etc. for it for decades. And if they can't understand it, or it does things it shouldn't do, then they can't do that, and they'll reject it.

      20 years ago people did exactly this kind of thing for NTFS. The result after all that time was one central common NTFS driver that's read-only. Because all those other contributors are long-gone and their code was something that "worked" for a brief period but was atrocious to integrate or debug.

      Maintenance is king. Especially in an era where security of something like NTFS, and data security of the user, could easily trash or compromise someone's system and people won't be able to understand how or where it came from. You do *NOT* want a code-dump running your filesystem with any kind of useful data. And you certainly don't want to be responsible for someone else's codedump when your users all blame YOU for their data trashing itself on hardware that you don't even have available to yourself and cannot debug on.

      This isn't a gift. It's certainly not a unique gift, it's happened dozens of times before. It's an obligation.

      1. oiseau
        Facepalm

        Re: Bit harsh

        ... isn't a gift.

        ...It's an obligation.

        Great write up. +1

        Summarised long ago by Virgil, ca. 29/19 BC in Aeneid (II, 49)

        Timeo Danaos et dona ferentes

        O.

        1. Paul Herber Silver badge

          Re: Bit harsh

          'Timeo Danaos et dona ferentes'

          Is that the Timeo Danaos who plays for Barca? I didn't know he was with that Dona Ferentes these days!

          1. Sanguma
            Happy

            Re: Bit harsh

            'Timeo Danaos et dona ferentes'

            The timid Danes, fearing even their Donnas! :)

            1. John Brown (no body) Silver badge

              Re: Bit harsh

              I never realised the Danes made such fearsome kebabs!!

        2. Anonymous Coward
          Anonymous Coward

          Re: Bit harsh

          Except... Linux is falling behind when it comes to file systems. ZFS is now highly desired by many users, yet Linux (through no particular fault of the devs or Sun - it's a consequence of history) has no easy way to accommodate it. Now someone is offering a decent NTFS implementation and the initial response is, "thanks, but no thanks". This "it's too big for us to accommodate" response for NTFS would also apply to ZFS if that were to magically be donated under a GPL license.

          So current criticism of Oracle sounds somewhat hollow; it'd be spurned anyway, even if it was offered with a GPL license.

          The response perhaps ought to be "thank you - hang a mo", followed by a call to arms to raise enough dev effort to properly absorb something this big. Otherwise Linux will forever be rejecting large donations that, really, it ought to be able to accept.

          1. Anonymous Coward
            Anonymous Coward

            Re: Bit harsh

            ZFS is actually quite usable these days - I have a couple boxes running it on root via ZoL.

            1. Anonymous Coward
              Anonymous Coward

              Re: Bit harsh

              Yes I know that ZFS is perfectly usable and very good - it's just not in the kernel mainstream project.

              1. Alan Brown Silver badge

                Re: Bit harsh

                > I know that ZFS is perfectly usable and very good - it's just not in the kernel mainstream project.

                ZFS isn't in the kernel mainstream because certain Sun developers specifically DID NOT WANT it to be (apparently there were resignation threats) and the CDDL was deliberately written to ensure legal incompatibility.

                They work together quite happily, but they can't be distributed together as a working system (dead easy to work around, just like MS fonts and various other non-GPL bits).

                1. Anonymous Coward
                  Anonymous Coward

                  Re: Bit harsh

                  ZFS isn't in the kernel mainstream because certain Sun developers specifically DID NOT WANT it to be (apparently there were resignation threats) and the CDDL was deliberately written to ensure legal incompatibility

                  You can't complain about that. It was their code, and it was up to them to release it how they wanted. And no, they don't always work together nicely; there are certain people in the Linux kernel community who are keen on making certain symbols unavailable to the ZFS modules.

          2. BinkyTheHorse
            Stop

            Re: Bit harsh

            This is a megapatch not for "Linux" (i.e. the entire ecosystem), but for the Linux kernel.

            As another commentard already noted below, there are non-kernel solutions for handling NTFS writes which already work quite well, and have been for many years.

            Why should there be a "call to arms" if the prospective contributor has put (charitably assuming) minimal effort into preparing the contribution to be usable, and better alternatives already exist?

            (also, I'm pretty sure there's kernel-level support for ZFS nowadays)

            1. Anonymous Coward
              Anonymous Coward

              Re: Bit harsh

              There is no official kernel-level support for ZFS. There is a separate project, from which end users are free to download and install kernel modules to their heart's content, but the Linux kernel doesn't want to, and indeed can't, have anything to do with it, due to license incompatibilities.

              Why should there be a "call to arms" if the prospective contributor has put (charitably assuming) minimal effort into preparing the contribution to be usable, and better alternatives already exist?

              Because it's a gift. So far as I can tell the prospective contributor has nothing to lose if Linux declines to incorporate the patch into the kernel mainstream. Why should they bend over backwards to please the Linux kernel community? Ever heard the phrase, "Never look a gift horse in the mouth?". Better stuff for NTFS may exist - for varying definitions of "better", but that's not the point (and any FUSE based solution isn't exactly going to be very performant).

              Linus has already raised concerns about having enough manpower in the Linux kernel project, and it's easy to understand why; everyone needs to make a living, and the munificence of those companies who donate labour can be stretched only so far. If no one is willing to stand up and do extra work, Linux will end up missing out on things.

              1. bombastic bob Silver badge
                Devil

                Re: Bit harsh

                I would rather that they do a FUSE-based solution, and THEN work on the fuse kernel drivers so that they're as efficient as possible.

                FUSE-based solutions would ALSO work on FreeBSD...

                1. Jamie Jones Silver badge

                  Re: Bit harsh

                  I admit I don't use it much, but what's wrong with /usr/ports/sysutils/fusefs-ntfs ?

                  "NTFS-3G is a stable, full-featured, read-write NTFS driver for Linux, Android, Mac OS X, FreeBSD, NetBSD, OpenSolaris, QNX, Haiku, and other operating systems. It provides safe handling of the Windows XP, Windows Server 2003, Windows 2000, Windows Vista, Windows Server 2008, Windows 7, Windows 8 and Windows 10 NTFS file systems."

                2. Maelstorm Bronze badge

                  Re: Bit harsh

                  FUSE-based solutions would ALSO work on FreeBSD...

                  FUSE already exists for FreeBSD in the ports collection. Besides, Sun Microsystems contributed ZFS to FreeBSD, so FreeBSD natively supports ZFS.

                  1. Microchip

                    Re: Bit harsh

                    Don't think it was contributed, but the CDDL is compatible with the BSD licenses, so people were free to port the code as they wished from OpenSolaris and include it in the BSD distributions.

                    1. Anonymous Coward
                      Anonymous Coward

                      @Microchip - Re: Bit harsh

                      Exactly! CDDL was designed specifically to target Linux. BSD was not a threat for them.

                3. Jaybus

                  Re: Bit harsh

                  Well, there is already ntfs-3g, which is FUSE based. Why should they work on yet another? The problem with that is the huge performance hit, especially with lots of small writes, that is directly related to it using FUSE. A kernel driver for NTFS is not a ridiculous concept.

                  Oh, and btw, ntfs-3g has > 29k lines of code, so the existing FUSE-based FS is actually slightly larger than the proposed kernel driver. For comparison, ext4 has 29k and xfs has 65k. So wtf?

              2. Alan Brown Silver badge

                Re: Bit harsh

                "If no one is willing to stand up and do extra work, Linux will end up missing out on things."

                Indeed. If Paragon were truly committed to this, they'd step up with the manpower to document the hell out of everything and rewrite this to comply with standard formats

                Otherwise it's just a publicity stunt for abandonware

                1. Anonymous Coward
                  Anonymous Coward

                  Re: Bit harsh

                  "Indeed. If Paragon were truly committed to this, they'd step up with the manpower to document the hell out of everything and rewrite this to comply with standard formats"

                  Forget Paragon / NTFS. What would the Linux kernel community do with 27,000 lines of code they didn't like the style of and also required a lot of understanding but represented something they really, really wanted to incorporate into their kernel? Say it was Nvidia offering their driver code base on a take-it-or-leave-it basis?

                  Paragon seem to be copping some bad publicity from this - so how is that supposed to encourage other companies to donate code?

            2. doublelayer Silver badge

              Re: Bit harsh

              "Why should there be a "call to arms" if the prospective contributor has put (charitably assuming) minimal effort into preparing the contribution to be usable, and better alternatives already exist?"

              Do better alternatives exist? Ones that run quickly and support a lot of functionality? Ones that don't require complex installation or configuration for distro devs, one of the main reasons short of performance that stuff gets put into the kernel? I haven't seen that.

              A call to arms doesn't mean that everyone drops what they're doing and starts focusing only on this. It could be much more basic than that. For example, a call to arms among a couple of people who could read some of this code and give advice to the original developers about how to modify it to more easily connect with the kernel and get through a review. Unless the code is worthless, it would seem like there might be some benefits in doing something like that, especially as the original developer claims they're planning to continue to support it. They might be liars, but if they didn't care at all, they wouldn't have donated this in the first place. Maybe it's worth believing them and allowing them to prove it by helping them to make it kernel-worthy. If they don't bother, we're only a few emails down.

              1. Anonymous Coward
                Anonymous Coward

                @doublelayer - Re: Bit harsh

                Haven't you heard of poisoned gifts? You don't approach the Linux kernel developers by dumping code at their door and calling the press to tell the world about your charitable act.

          3. Snake Silver badge

            Re: AC, "Thank you - hang a mo"

            I agree. OK, we can certainly understand when a project is initially too large to get hold of, but are the Linux devs in effect saying, "with anything large, unless it's provided as a completely plug-in solution we're not really interested"?

            Heck of a way to grow a community-based development.

            Yes, ideally we would love to see corporate submissions donated with every "I" dotted and "T" crossed. But corporations aren't required, nay really aren't even in the business of, providing the additional manpower to polish a codebase for guaranteed compatibility prior to submitting said codebase free to a non-profit community-based venture. It's a bit like looking the gift horse in the mouth, strictly IMHO.

            Maybe the devs are just communicating their shock (and a bit of horror) of having to review something much larger than their typical codebase. After regrouping they may come about and embrace the submission and be pleased with it.

          4. imdatsolak

            Re: Bit harsh

            But this is exactly what they said. When you read the mailing list, there were efforts such as “let’s break this into multiple steps, so that we can integrate it” OR “please put me on CC when you repost, so I can see how I can help”, etc.

            The kernel devs actually offered help and showed ways how this can be integrated by working together...

            No, I’m not a kernel dev but truth must be said...

          5. danno44

            Re: Bit harsh

            Linux is falling behind in filesystems? Even if that is true, the answer is to accept anything thrown at its feet, even if what's thrown would crush their feet? I had no idea Linux was so desperate.

            Maybe the devs should stand out on the street with a sign that reads, "Will work for yesterday's filesystem."

            1. amanfromMars 1 Silver badge

              Re: Bit harsh

              Linux is falling behind in filesystems? Even if that is true, the answer is to accept anything thrown at its feet, even if what's thrown would crush their feet? I had no idea Linux was so desperate.

              Maybe the devs should stand out on the street with a sign that reads, "Will work for yesterday's filesystem." ..... danno44

              That's a tad harsh, danno44. If you can read between the lines of the submission, is it not really seeking out suitable workers for future filesystems with advanced safe and secure systems of remote operation?

            2. Anonymous Coward
              Anonymous Coward

              Re: Bit harsh

              Name the last FS added to the mainstream Linux kernel.

              Effectively it's stuck on ext4, and for some distros it's on xfs. Btrfs is becoming yesterday's fs, and never actually worked properly in the first place. The working filesystems in Linux are dinosaurs in comparison to something like ZFS, and there seems little prospect of it catching up (e.g. by fixing Btrfs), because it looks like there aren't enough people who care to put in the hours.

              1. hanshenrik

                Re: Bit harsh

                > Name the last FS added to the mainstream Linux kernel.

                Soon the answer will be "bcachefs" :)

          6. Anonymous Coward
            Anonymous Coward

            Re: Bit harsh

            Yes, it's all about priorities, and speaking of priorities, apparently it's more important to change terminology from master/slave or blacklist, etc., than to review what is an excellent driver made by Paragon...

            This Linux community is becoming a true clown world.

        3. this

          Re: Bit harsh

          Tr: Beware of geeks bearing gifts

          1. Anonymous Coward
            Anonymous Coward

            Re: Bit harsh

            particularly wooden equine monuments

          2. Mage Silver badge
            Coat

            Re: Bit harsh

            Beware of Geeks bearing Gifs

            There might be a vulnerability in ImageMagick?

        4. JacobZ

          Re: Bit harsh

          "Beware of geeks bearing gifts"?

        5. hodgesrm

          Re: Bit harsh

          > Timeo Danaos et dona ferentes.

          Indeed. That was the well-known security engineer @Laocoon describing the first known Trojan Horse exploit.

      2. Leathery Hawkeye
        Thumb Up

        Re: Bit harsh

        Great analogy and comprehensive post that explains the difficulty here

        1. GrumpenKraut
          Thumb Up

          Re: Bit harsh

          Even a *car* analogy!

      3. Mage Silver badge
        Happy

        Re: Bit harsh

        Obligatory XKCD, this week!

        1. d3bug
          Pint

          Re: Bit harsh

          They really have covered everything haven't they?

          Have one on me.

      4. amanfromMars 1 Silver badge

        Re: Bit harsh @Lee D

        Crikey! ....... that is describing it as a Right Royal Crown of an Almost Poisonous Chalice, Lee D.

        And probably hoped to be thought designed that way to discourage and/or divert the less than wholly worthy able to lead in new directions with fresh virgin instructions/novel pretty untainted intelligence experiencing and documenting future information for mass multi media mogulling machine presentation ..... Live Virtual Introduction to Sound and Vision Mentors and Monitors of Earthly Creations for Humanised Populations ..... which in the much bigger picture schema of things are just as another one of those Newer Fangled Entangled Alien Civilisations.

        Nevertheless, that is what Future Machines have done and do for you. Would you have them stop and do something else too? What would that be, and for whom and for what and why, and when and where?

        Pray tell and let us all consider if it be truly worthy of the tasks to be undertaken and invested in to guarantee unlimited success to excess .... which I'm sure all can imagine in Truth is Also Nothing Less than Fablessly EMPowering ...... thus the Default Advisory Note Daring One to Not Care and Suffer Dire Consequences ...... which of course is the Fool Dunce Root of Litter Runts and Stunted Minds.

    5. Zippy´s Sausage Factory

      Re: Bit harsh

      I wonder if Paragon's thinking was "we're not sure what we actually need to do with this, let's just submit and they'll give us some pointers as to how we need to sort it out".

      If that was the case, then I'm sure it's all going according to plan. However, I suspect they were just being lazy and hoping someone else would just do all the heavy lifting.

    6. Anonymous Coward
      Anonymous Coward

      @Androgynous Cupboard - Re: Bit harsh

      Ladies and gentlemen, your pilot today is not an airline pilot, but it's hardly his fault.

      That's how I read it.

    7. Blackjack Silver badge

      Re: Bit harsh

      The problem is that, while not in the kernel, Linux already has quite decent support for NTFS. Linux Mint might need you to install something extra, but I barely have any problems with their NTFS driver that's not in the kernel.

      1. Anonymous Coward
        Anonymous Coward

        Re: Bit harsh

        Just did a quick DuckDuckGo - looks like Linux access to NTFS was available in 2007 via ntfs-3g. Seems like my old WinXP drive was NTFS, which worked just fine under Ubuntu 14.04. Really asking, not trolling - what's the big deal about a new NTFS driver?

        1. A.P. Veening Silver badge

          Re: Bit harsh

          Nothing new about an NTFS driver, but dropping a large section of code into the kernel is a big deal.

    8. danno44

      Re: Bit harsh

      20 years ago it would've been a harsher reply. It's not like Linux is starved for attention, then or now.

      "Paragon devs are not kernel devs" - that's the reason they shouldn't submit code for inclusion in the kernel in the first place. Maybe we should all just drop code at this doorstep and get indignant and make excuses when we're not effusively praised. Not.

      Has any major company expressed that Linux will be ripped out if NTFS isn't added into the kernel soon? Have even YOU been thinking that before today?

      It's one thing to ask if there may be interest and start a discussion that leads to an eventual thoughtful, reviewable contribution. But this is akin to older parents moving out of their home into a retirement facility and saying to their adult children, "Hey you can have everything that was in our home - also, it's already sitting in your driveway. I didn't hear a thank-you!!!!"

      1. Cederic Silver badge

        Re: Bit harsh

        Nobody's a kernel dev until they submit code for inclusion in the kernel. Everybody has to start somewhere, and Paragon at least started with working code that does something new (to the kernel).

        1. A.P. Veening Silver badge

          Re: Bit harsh

          How can it be working if it doesn't even compile?

    9. don't you hate it when you lose your account

      Re: Bit harsh

      Can't be worse than Microsoft ntfs code. Click a folder and have a family while you wait for the contents.

      1. Andy Non Silver badge

        Re: Bit harsh

        Going back a few years now, but I remember when Microsoft introduced automatic expansion of Zip files into Windows Explorer. As I used to keep a number of very large zip archives the feature effectively crashed Windows when opening a folder that happened to include one of those files. CTRL-ALT-DEL reboot time. "It's a feature I tell you, not a bug".

    10. Anonymous Coward
      Anonymous Coward

      Re: Bit harsh

      They should just shove it in as a systemd module; that's what everyone else does.

    11. Cynic_999

      Re: Bit harsh

      No, NOT a good idea to include unvetted code in a kernel just because it does something people want. Not unless you don't mind having an OS that suffers bugs, unexplained crashes and occasionally trashes your HDD. As for "cleaning it up" - as a programmer myself, I have found that it is generally far quicker to write code from scratch than it is to check & clean someone else's badly written code.

  3. karlkarl Silver badge

    "I suspect 20 years ago, someone would have said thanks, jumped on it and cleaned it up for inclusion."

    In some ways it is nice to see Linux not quite so reliant on handouts. They can focus on quality and maintenance. This is good evidence that the project is reaching critical mass, and that one day proprietary companies will be unable to compete.

    I also see Linux as something that shouldn't become a "code dump", with companies assuming that end-of-life projects which can no longer be monetised can simply be maintained for free as part of Linux. If Paragon had committed to contributing their code earlier, perhaps they would have been welcomed a little more. I see this quite a lot with obsolete embedded ARM hardware that is no longer sold: it bloats the kernel without benefiting 99.9999% of users.

    But at the same time, the kernel developers should jump on this one. For too long we have been under the tyranny of Microsoft's NTFS, and FUSE has never really been up to speed in the enterprise space. It also takes the wind out of Microsoft's sails if they ever try to contribute their own driver to Linux for control and publicity (and EEE, of course).

    1. anothercynic Silver badge

      I don't think Paragon is 'dumping' their code... but I might be mistaken. As said elsewhere in this post thread, a bit of pre-submission engagement could've prevented the horrible flamethrower-at-your-netherparts moment. :-)

      1. Lee D Silver badge

        It didn't compile. People had to patch the Makefile.

        It has out-of-bounds accesses. People had to run it through static analyzers to spot them. That's a potential "trash everyone's data" bug right there. And it's not compliant with any of the kernel's coding policies.

        Also, generally, it would be a pull request of a well-maintained and reviewed tree, not a huge patch on a mailing list. You'd ask for it to be put into -staging or even ask for sign-off first, not just dump it in the mailing list.

        They've done everything wrong so far. It's a dump. Or they could have just sent an email saying "We have X, what's the proper path to get that into the kernel?" and be put through to the right people, stage it in their trees for a year or two, iron out the bugs, etc.

        Instead they threw a mega-patch onto a mailing list, one which didn't build and had bugs visible within seconds, and then crowed about how expert they were in doing this. There are also NO TESTS for the filesystem - it doesn't use any of the existing kernel filesystem testing procedures.

        That's not the way to make friends in the kernel community.

        1. anothercynic Silver badge

          Like I said, Lee, they could've done this better.

          But not even compiling... that's not nice. I know another vendor who has form with that... Apple. Download any of their open source tarballs and try to make them compile (on a Mac)... good luck!

          *sigh*

        2. Soruk

          About 10 years ago I evaluated the Paragon driver for a project I was working on for my then employer. The USB disc I was using to test it happened to have a bad sector, and I got quite the surprise when the NTFS driver triggered a kernel panic on hitting it. It wasn't so much that it was faster; they wanted a driver that had commercial support. However, when we raised the issue of a kernel panic on a bad sector we were completely ignored. We weren't a small company either, but a multi-national that has graced these pages several times in the past. I successfully steered the powers that be in the direction of the NTFS-3G drivers, which were not only free, but far more stable.

        3. Nick Ryan Silver badge

          "We have X, what's the proper path to get that into the kernel?"

          This. This is the step that should have been addressed first. If the organisation were closing in a matter of hours and they were in an extreme rush to ensure that the source would not be lost, then dump-and-run would be acceptable, with an explanation - but not otherwise.

      2. Doctor Syntax Silver badge

        "let me build our half of the bridge" doesn't sound much like a flame-thrower approach.

        1. anothercynic Silver badge
          Devil

          What, you didn't read the subtext? ;-)

          1. John Brown (no body) Silver badge
            Flame

            Yaeh, of course. Bridges? BURN THEM!!!

        2. Anonymous Coward
          Anonymous Coward

          "let me build our half of the bridge"

          I think they wanted to supply some of the middle section and have other people do the end bits.

    2. ComputerSays_noAbsolutelyNo Silver badge
      Linux

      Linux is too big and too relevant for the maintainers to accept code from everywhere.

      When it was Linus coding away, such a donation would probably have been welcomed. However, when half the world runs on Linux, there's little room for experimentation.

      After all, we accept crappy vehicles on our sidewalks (hoverboards that burst into flames), yet we highly regulate what flies above our heads.

    3. Ilgaz

      Paragon actually sells that code to large companies including Android vendors for great sums of money. Nobody is dumping anything.

      Let's say you have a render farm based on Linux and all your artists use Mac or Windows. You need something guaranteed to work, with considerable support. You go to Paragon-like companies.

      1. Flocke Kroes Silver badge

        Re: You go to paragon like companies

        Only if you are utterly fearless.

        For 'no test suite', read 'any change could break things and no-one would know until after your data was trashed'. For 'out-of-bounds accesses' read 'your data was being slowly trashed and no-one noticed until the source code appeared on the LKML'. There are closed source vendors with quality products. Some of them may not take advantage of technological lock-in for a year or two. Paragon does not appear to be one of them.

        I will stick with my 'no source code' == 'no sale' policy thank you.

      2. the hatter

        Your render farm doesn't need to read physical NTFS disks; you're not sharing physical disks between artists and the servers. You just need a common network FS between the farm and the artists, or even a different network FS for each side, served from the storage controllers on the network, unless you're living in some very weird Microsoft-focused company. As mentioned in the article, NTFS is going to hit a downward trend soon. Paragon had all these years to share the code from their commercial macOS and other products with the OSS community; it would hopefully have been a positive two-way exchange of knowledge and updates.

        Instead, it feels like they want to let the OSS community take a look now and make fixes while their legacy customers still want fixes. Are they doing the same with their ReFS implementation, something new and of potential interest to the Linux community? Doesn't look like it; the new toys are just for Paragon's eyes.

      3. Alan Brown Silver badge

        "Paragon actually sells that code to large companies including Android vendors for great sums of money. "

        Given the code quality, that should be even more worrying.

      4. John Brown (no body) Silver badge

        "Let's say you have a render farm based on Linux and all your artists use Mac or Windows. You need something guaranteed to work with considerable support. You go to paragon like companies."

        Or you could just have a few different machines with different OS on a network.

    4. DCFusor

      Yeah, but...

      20 years ago real NTFS support would have been worth a ton more, and perhaps justified a lot of effort to add it. Now it's borderline obsolete...

      1. bombastic bob Silver badge
        Meh

        re-thinking the priorities of including this NTFS driver in the base kernel

        NTFS support has its use, especially when writing disk imaging and recovery utilities that boot Linux from a CD/DVD [as one example].

        Seriously, I think THIS is the best use of NTFS support in Linux. For this kind of use case, a FUSE driver would be just fine. If you're thinking of actually USING an NTFS volume between OSs for a multi-boot system, for anything other than "simple data file interchange", it might be time to re-think your priorities.

        And the worst case: a high performance Linux-based server system that actually READS AND STORES DATA using NTFS. Seriously, why would anyone be using *THAT* INSTEAD of ZFS or even EXT4??? And so, the USE CASE for a "high performance kernel-based NTFS driver" seems very very limited to me...

        It might be time for re-examining the use case. And though "thank you very much" for the contribution, if I were to contribute 25,000 lines of Linux kernel code to support a PDP, VAX or HP 3000 minicomputer, I suspect that it would not actually make it into the official distribution... it would require SO much review effort and testing to maintain reliability standards that nobody would touch it.

        but... as a FUSE driver, I'd gladly welcome a really good NTFS file system driver as an add-on package! I think the major distro maintainers would, too. And a FUSE driver should work with FreeBSD (and the other BSD's) and so on.

        /me points out that there are several contributed kernel drivers that must be built from source, and you could always ship it THIS way, as a source package using something like Debian's "module-assistant" to build and install it. [all of the debian-based distros should have this]

        1. Roland6 Silver badge

          Re: re-thinking the priorities of including this NTFS driver in the base kernel

          And the worst case: a high performance Linux-based server system that actually READS AND STORES DATA using NTFS. Seriously, why would anyone be using *THAT* INSTEAD of ZFS or even EXT4???

          Well given the general direction of travel...

          Microsoft Windows (Linux) - it would allow an install over a pre-existing Windows install....

          Which raises the question of whether the (existing) Linux community should be enabling this to happen or whether it is something that should be left to MS...

        2. John Brown (no body) Silver badge

          Re: re-thinking the priorities of including this NTFS driver in the base kernel

          "but... as a FUSE driver, I'd gladly welcome a really good NTFS file system driver as an add-on package! I think the major distro maintainers would, too. And a FUSE driver should work with FreeBSD (and the other BSD's) and so on."

          My only real use case is if/when I need to format or access larger pendrives or USB hard drives that are too big for FAT32. The FUSE ntfs-3g driver on FreeBSD has worked without exception for me so far. I believe it's available on Linux too. No one yet seems to have posted an argument for actually having NTFS support in the kernel.

        3. 12Sided3DObject

          Re: re-thinking the priorities of including this NTFS driver in the base kernel

          ext4 vs NTFS is almost embarrassingly in NTFS’ favor, honestly.

          A few key places where NTFS beats the crap out of ext4:

          - copy-on-write. NTFS gets copy-on-write behaviour via the Volume Shadow Copy Service, as long as you don't disable it. Ext4 is not copy-on-write. Data integrity is naturally more assured with NTFS for this reason.

          - ext4 has whatever inodes you gave it at creation time. Didn't make enough? Too bad. Reformat. And guess better next time. This applies even if using extents, because you can still only have as many files as you have inodes, period. NTFS uses a growable table of inodes referring to extents, meaning it can't run out of inodes until you've exhausted the pointer size for that table, which is 4.3 billion on a single file system. Several other file systems don't have the limitation ext4 does. That limitation makes it an awful choice for a busy mail server, for example. Inode exhaustion SUCKS when only 30% of your drive is full... both have the same absolute maximum number of files per FS, which is 4.3 billion. But NTFS is dynamic; ext4 is fixed.

          - Snapshots (Windows calls them "shadow copies"). NTFS has the capability baked in, made easily accessible via the Volume Shadow Copy Service. If you're using LVM or something, you can get that functionality with ext4 (or any FS, really), but it's another layer (albeit a light one).

          - deduplication. NTFS supports it, starting with Windows 2000, and enhanced to be more similar to the way ZFS does it ("chunk"-based in NTFS) from Server 2012. Ext* has no such facility.

          - compression. NTFS has had it natively for 25 years. Ext* does not, to this day.

          - Native, per-file encryption based on certificates. NTFS has this, and it’s great. Ext4 does not, and you have to use something like LUKS to get anything even remotely close to it. But that’s more similar to bitlocker.

          - USN journaling. This, among the other reasons it exists, makes backups of NTFS a lot quicker than on most other file systems, because a quick scan of the USN journal will tell you what has changed, when it changed, and in what way it changed, rather than having to scan all inodes involved in a backup. Much smarter than simple attribute- or time-based backup.

          - indexed and deduplicated ACLs. NTFS maintains ACLs in a special file for each volume, making access enumeration significantly faster and more efficient than almost any other FS, including ext*.

          - extremely granular ACLs. Cool, but not strictly necessary for the majority of scenarios, for which POSIX ACLs are sufficient. For example, you can give permission to delete a file, but not otherwise modify or even read it. Or you might have permission to create a new file, but not to modify it once it is created. Or perhaps modify but not create or delete. Or perhaps append but not otherwise create, modify, read, or delete. You can't do that with POSIX ACLs. RWX just doesn't cover everything. And yes, there are legitimate and valuable applications for each of those scenarios I listed. You need extensions like richacls to achieve those capabilities with almost any other FS out there, including ext*.

          There are tons of little things NTFS does that ext* just...doesn’t...

          And the fragmentation thing someone brought up earlier is silly for two reasons. First, NTFS has also intentionally avoided fragmentation for the better part of 25 years, and volumes do not easily become fragmented unless you have usage patterns that would cause it; an ext* volume will become fragmented too, in that case. The second reason is solid state drives. Honestly, fragmentation is not a legitimate problem to care about any more. The only remaining legitimate reason to care about it is having huge drives. And if you're using ext4 on a 12 TB spinning rust drive, it's your own damn fault when you have problems or data loss down the line. NTFS, BTRFS, or ZFS are FAR better options for large magnetic disks than ext4.

          And it had the majority of these capabilities before even ext3 was a thing.

          Nobody should be praising ext. It’s old. It’s crusty. It’s stable, I guess. But it’s just... primitive AF.

          Putting it in the same class as NTFS is just laughable, when modern NTFS is a lot closer to ZFS and BTRFS, minus volume management utilities being built in.
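          The ext4 inode point above is easy to see from a shell. A minimal sketch (the mount point, device path and inode count below are purely illustrative, not recommendations):

          ```shell
          # Total vs. used vs. free inodes on the root filesystem. On ext4 the
          # "Inodes" column is fixed when the filesystem is created; NTFS grows
          # its MFT (its inode table) on demand instead.
          df -Pi /

          # For an inode-hungry workload (a busy mail spool, say) you would have
          # to size the inode table up front at mkfs time. Device and count are
          # purely illustrative:
          #   mkfs.ext4 -N 30000000 /dev/sdX1
          ```

          If `df -i` shows IFree at zero while `df -h` shows plenty of space, you have hit exactly the exhaustion scenario described above.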

  4. Binraider Silver badge

    Paragon serves its self-interest by contributing a minimally functional yet bloated driver. I'll stick to existing NTFS support, thanks.

  5. TrevorH

    There has been read/write support for NTFS in Linux for years. It's a FUSE-based filesystem, sure, but it works.

  6. Ilgaz

    Culture clash

    I have followed Paragon for years, and they did the Windows community great favours which also served Linux dual-booters.

    Thing is, they come from a different culture and development style. I am glad the kernel developers weren't too harsh on them.

  7. cjcox

    I too have been using ntfs-3g for years. Hasn't given me any issues.

    1. oiseau
      Linux

      ... using ntfs-3g for years. Hasn't give me any issues.

      Indeed ...

      Since I started with Linux some years ago and only for completeness' sake, I have made sure I had NTFS access in every distribution that I have tried.

      But I cannot recall the last time I needed to look at a drive formatted with NTFS.

      O.

    2. Leathery Hawkeye

      NTFS-3g FUSE has incredibly slow writes

    3. Peter Christy

      Me too! I don't have to access NTFS filesystems very often, but I have never noticed any issues with ntfs-3G and FUSE.

    4. Claverhouse Silver badge

      Same. ntfs-3g transparently does what is needed.

      Oddly enough today I am going to install a new drive [ GPT/EXT4 ] to replace an NTFS internal drive for videos [ when the new drive arrives DV ]. Suddenly files started disappearing like those stars in the sky in 'The Nine Billion Names of God'

      I can't prove it's because of NTFS --- the rest of the folders seem fine, and used daily --- but the only time I use it will be for portable drives used to transfer files to my unfortunate Windows-using friends.

      And whilst undoubtedly generous, whatever may be said good or bad about this gift, it's relevant to an old, dying system. Even MS prefers to use its replacement. For Linux it's rather like being given a 1973 Haynes Manual for some old rust-bucket when one's driving the latest BMW electric car.

  8. Anonymous Coward
    Anonymous Coward

    "NTFS is the default file system for Windows XP and later"

    NTFS has been the default file system for the Windows NT family since NT 3.1 was released in 1993. The latest version of NTFS (3.1) was released with XP however.

    1. bombastic bob Silver badge
      Devil

      Re: "NTFS is the default file system for Windows XP and later"

      In non-NT versions of Windows (i.e. 9x and ME), FAT and FAT32 were the defaults. XP was the first "NT merge" version, with no 9x/ME-style back-end version available as an alternative. I think this is kinda what they meant. NTFS became "the default" for Windows (in general) starting with XP.

  9. martinusher Silver badge

    So?

    I'm beginning to worry quite a bit about the state of the art of software development. In the overall scheme of things, 27,000 lines of code isn't really that much; it's a substantial chunk, but it should break down into components that can be individually reviewed and tested. The lead for this really should be Microsoft - for some reason they think they need to incorporate both the Linux OS and Linux-like capabilities into Windows, so they'd benefit from being able to integrate properly with Linux filesystems.

    Personally, I'm not that bothered one way or another. The Windows filesystem is a mess of prehistoric drive letters, arcane terminology reaching back to MS-DOS 3.0 and general incompatibility with the rest of the world (although it invariably ends up copying generic filesystem features -- badly -- and claiming it innovated them). If you're working on a Windows platform then Cygwin does an adequate job of taming its weirdness. The current NTFS component works fine to get files from the Windows filesystem. So we're good, for the most part.

    1. NullNix

      Re: So?

      > 27,000 lines of code isn't really that much, its a substantial chunk but it should break down into components that can be individually reviewed and tested.

      Yes, and doing that was Paragon's job, not the reviewer's. It's not like it's hard to split a huge ugly pile of work into neat commits. Picking an example totally at random because it's one I'm familiar with: it's not quite as big perhaps, but I did that for 10,000-odd lines of work just last month, originally in perhaps 250 completely unreviewable use-git-as-a-backup-system commits with commit log messages reading things like "fix the fix" and "giant pile of unsplit work" (https://sourceware.org/pipermail/binutils/2020-June/112012.html). It took perhaps two days to split up six months or so of work.

      If you can't be bothered to do even that much to make your code easier to follow, I don't think it says much about your likely long-term commitment to the contribution or about your consideration for the maintainer you're dumping this stuff on.
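      For what it's worth, the mechanics of that split are mundane. A minimal sketch with plain git, in a throwaway repo (file names and commit messages are illustrative; nothing here is Paragon's actual tree):

      ```shell
      set -e
      # Throwaway demo: one "giant pile of unsplit work" commit re-done as two
      # reviewable commits, by soft-resetting and re-committing piece by piece.
      dir=$(mktemp -d) && cd "$dir"
      git init -q
      git config user.email demo@example.com
      git config user.name demo
      git commit --allow-empty -qm "baseline"        # stand-in for the existing tree
      printf 'superblock code\n' > super.c
      printf 'inode code\n' > inode.c
      git add . && git commit -qm "giant pile of unsplit work"

      git reset --soft HEAD~1                        # keep the working tree, drop the commit
      git reset -q                                   # unstage everything
      git add super.c && git commit -qm "part 1: superblock handling"
      git add inode.c && git commit -qm "part 2: inode handling"
      git log --oneline                              # baseline plus two reviewable commits
      ```

      When a single file mixes several logical changes, `git add -p` does the same job at hunk granularity.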

    2. Anonymous Coward
      Anonymous Coward

      @martinusher - Re: So?

      You're right!

      If Microsoft truly loves Linux as it claims, then why don't they prove it? Why not give a hand, or take the lead, in porting NTFS, SMB, RDP and other pieces to Linux?

      1. shade82000

        Re: @martinusher - So?

        Because they are still in the Embrace stage of their triple E approach.

    3. oiseau
      Facepalm

      Re: So?

      " ... for some reason they think they need to incorporate both the Linux OS and Linux-like capabilities into Windows ..."

      Some reason?

      Where have you been for the last 30+ years?

      Never heard of "Embrace, extend, and extinguish"?

      O.

  10. Anonymous Coward
    Anonymous Coward

    XP?

    NTFS has been around much longer than that. It was introduced in Windows NT 3.1.

  11. Maelstorm Bronze badge
    Boffin

    NTFS on Linux? Now who would have thought?

    The problem with NTFS is that it is big, bulky, bloated, and cumbersome. That 27,000 lines of code may very well be needed to have a full-featured read/write driver for the file system. I have personally done kernel-level code and I have written and implemented file systems and device drivers, so I know the effort it takes to make something like that. Because NTFS is a foreign file system to Linux, you have to support all available options, which is no easy feat. Additionally, NTFS was developed in parallel with the original Windows NT. In my use of NTFS over the 25 years or so it's been around, barring a hardware failure, I was always able to recover from a disk error. Yeah, it's from Microsoft. Yeah, it has its issues. But for what it is, and the generation in which it was written, NTFS is actually quite good, IMHO.

    The Linux community should hunker down and start the code review process. Break it up into multiple files and call it a module. Go through and tweak the code and optimize it. Linux, and the *BSDs by extension, have needed an NTFS driver for some time, although in this day and age the value of such is somewhat diminished.

    1. UBF

      Re: NTFS on Linux? Now who would have thought?

      I don't know why you got downvoted. I totally agree with the "I was always able to recover from a disk error" experience - I managed to salvage too many NTFS disks, even if it took dozens of hours of scanning. I even managed recently to salvage an SSD disk that wasn't recognised by Windows at all.

      I consider repairability to be the most desirable feature of a mature file system. Therefore I would be very happy if Linux integrated NTFS into the kernel, just for the new pathways for HDD data recovery.

      1. A.P. Veening Silver badge

        Re: NTFS on Linux? Now who would have thought?

        I don't know why you got downvoted.

        Not for the part you agreed with, as most of us (I think) agree with that, but for:

        The Linux community should hunker down

        Putting an obligation on somebody without even discussing it first is just not done. And it isn't a small obligation either, or something that is really necessary (NTFS support has been around in Linux for quite some time now).

  12. Henry Wertz 1 Gold badge

    Not sure there's a big problem

    "The Linux community should hunker down and start the code review process. Break it up into multiple files and call it a module. "

    Not really their job; the way it's worked for a long time, by the time a patch makes it to the Linux Kernel Mailing List it's really expected to be in form to include into the kernel. A project the size of the Linux kernel would be unmaintainable if they were spending all their time cleaning up not-up-to-standard patches, or accepting patches as-is that don't meet standards.

    That said... a) As others have said, there's already a FUSE-based read/write NTFS driver, and has been for over a decade; it's not a matter of "accept these patches or Linux can't write to NTFS". I think the main benefit of an in-kernel driver is probably better benchmark speeds than going through FUSE; I've read and written to NTFS disks just fine over the years, though.

    b) As far as I know, there's nothing definitive here about Paragon leaving this huge patch and running off; they may well be in the process of preparing to re-submit their patches. This is not necessarily some sign of inexperience on Paragon's part either; even Google has had this kind of thing happen (submit a giant patch for security or scheduler changes or something, have it rejected because it's more like a huge code dump than a patch.)

  13. Philip Hands

    Creative Outsourcing?

    Imagine the scene:

    Wide-eyed intern gets assigned their first task at Paragon: "Just get that lot to compile with the latest kernel will you?"

    ... some time later, a rather dejected intern reports that they cannot understand what's wrong, and that having asked around, it seems everyone who knew anything about the code left some time ago; so they ask if it would be OK to bundle up all the GPLed bits and lob them over the fence at the kernel devs in the hope they might have some ideas how to fix it.

    That way Paragon gets to carry on charging people for the proprietary bits without having to spend money maintaining obsolescent code.

    I guess we'll see if that's got any truth to it by how closely they manage to track the versions of kernels they support as new versions come out. (The FAQ currently mentions "up to 5.6.x"... was there anything in 5.7+ that breaks their code, I wonder?)

  14. Anonymous Coward
    Anonymous Coward

    Well, it is a complex piece of code...

    I don't know if 27,000 lines is excessive... NTFS is quite complicated, after all.

    But all we have to do is compare Paragon's driver with Microsoft's NTFS driver for Windows.

    Oh.

  15. Andy Non Silver badge

    Ext4 ?

    When I first started using Linux I kept my external hard drives on NTFS for a year or two for compatibility purposes with Windows, but let that lapse as I fully adopted Linux. Now I format everything Ext4 which has worked fine for me for years now and I no longer need any Windows compatibility.

    Tell a lie, I do have one USB stick formatted as Fat32 to transfer the occasional file to/from the wife's Windows laptop.

    1. devTrail

      Re: Ext4 ?

      There is still some software around that runs only on Widows. So some people need dual boot on their laptops and they need access to the Windows partition from Linux since the other way round is a pain.

      The current drivers work well. The article says that Linux can't write NTFS, but the solution has been available for years.

      1. Steve K
        Coat

        Re: Ext4 ?

        There is still some software around that runs only on Widows

        A crumb of comfort for Andy Non's wife and her laptop if he pre-deceases her....

    2. A.P. Veening Silver badge

      Re: Ext4 ?

      For transfers to and from other computers I just use a folder on an Ext4 formatted disk attached to a small server at home.

  16. CAPS LOCK

    NTFS though...

    ...I'm not really that confident in NTFS on Windows, having had some, err, problems, so a reverse-engineered version is not high on my priorities...

  17. DenTheMan

    The hello world app......

    takes up 500 MB of space.

    The 'hello mad world' app, that is.

  18. devTrail

    Are there pending patents?

    I remember that Microsoft has a habit of patenting any single change they make to their code. TomTom was fined because Microsoft patented the patch that allowed names longer than 8 characters on FAT file systems, and something similar was present in the Linux drivers. Isn't there a risk of going back to Microsoft threatening Linux users if this is included without a check for any pending patents?

  19. d3bug

    Mainline NTFS FUSE perhaps?

    I just have a quick question... I don't maintain any kernel code, nor have I looked at the existing FUSE NTFS code, but is there any reason the existing really well-working NTFS FUSE code can't be mainlined? It seems like it's old and stable enough at this point to eliminate the FUSE bottlenecks right? Wouldn't it be less work to change the FUSE code rather than include the monstrosity that Paragon dumped since NTFS FUSE is maintained regularly-ish? Or.... am I way off base here?

    1. devTrail

      Re: Mainline NTFS FUSE perhaps?

      There is a different aspect and it is what I asked in the post immediately above.

      FUSE with read and write capabilities is not inside the kernel, it must be added by the user after the installation (or sometimes it is done by the installer). It is a small issue from the technical point of view, but on the other hand it can be easily excluded by an organisation using Linux on a large scale. Including the same capabilities in the kernel is a big change from the legal point of view.

  20. vincent himpe

    Funny

    Open source community complaining that they have been given the source ...

    Which leads me to this: if this driver is 'too large', how is the average "joe user" supposed to scan the entire operating system?

    Next time I get an answer along the lines of "it's open source, you can check it yourself", I will point them to this driver ...

    1. Cynic_999

      Re: Funny

      It is not expected that a *single user* would be able to scan the entire Linux source code. A single user can however look at a particular issue or aspect, and many people looking at the code will hopefully eventually spot problems or find more efficient ways to achieve certain things.

      1. vincent himpe

        Re: Funny

        OK, so not the entire operating system. Let's say I have an issue with a driver.

        "but you have the source" !

        If the Linux maintainers barf at having to go over 27,000 lines of code, how can you expect Joe to do it?

        See where this is headed ? Many drivers are MUCH larger than this...

        1. alisonken1

          Re: Funny

          As I read it -

          Most driver code has been through multiple reviews on smaller code dumps over time. IIRC most drivers started as initial submissions of baseline frameworks that could be tested and verified. The driver is marked as EXPERIMENTAL - USE AT YOUR OWN RISK during this phase. As each submission is vetted, additional functionality is added, and only the new changes need to be tested/vetted.

          27K lines of code added in 5K chunks can be tested/vetted rather easily.

          27K lines of code added in a single 27K chunk is not easy to test/vet.

          Also, keep in mind, once mainlined, it must be maintained by kernel devs. Or better yet, the original author/company (since they already know how it works).

  21. Cynic_999

    I wonder

    If the comments would have been similar had Paragon dumped 27,000 lines of code on Microsoft, claiming it would give Windows native EXT4 support, but Microsoft had declined to include it in their next Windows build?

  22. sawatts

    Only 27k lines?

    I had to debug switch statements longer than that.

    1. Cynic_999

      Re: Only 27k lines?

      I have written a FAT32 file system in well-commented ARM assembler that was under 2600 lines.

  23. fredesmite2

    There used to be an NTFS driver

    in CentOS and Ubuntu at one time... why do we need another?
