Problems for the Linux kernel NTFS driver as author goes silent

There are doubts about the future of the new read-write NTFS driver in the Linux kernel, because its author is not maintaining the code, or even answering his email, leaving the code orphaned, says a would-be helper. It took a long time and a lot of work to get Paragon Software's NTFS3 driver merged into the Linux kernel. It …

  1. beardman

    Paragon Software was founded and run by Russian folks

    Given the current events involving Russia, it might very well be that they have been hit by the sanctions, cannot work with Western companies, and are completely off the grid. Heck, some of them might even have been taken into the army, sent into a meaningless war, and faced death...

    1. wolfetone Silver badge

      Re: Paragon Software was founded and run by Russian folks

      "Heck, some of them even might have been taken to army and brought into meaningless war and faced death..."

      This was my thought, after thinking that maybe the poor guy was burnt out or suffering from other inner demons.

      Regardless, wherever they are, I hope they're alright.

  2. JimmyPage
    WTF?

    Hang on a mo ...

    Isn't one of the whole fricking points about "open source" that anyone can pick it up and develop (or at the very least maintain) it? So it shouldn't kill a project if the author(s) go AWOL (or, given the names, MIA)?

    What exactly is needed apart from the source code here ?

    I understand why some projects die because there's no resource to advance them. But that doesn't seem to be the case here.

    1. Yet Another Anonymous coward Silver badge

      Re: Hang on a mo ...

      >What exactly is needed apart from the source code here ?

      Deep technical knowledge of the proprietary Microsoft file system

      Deep technical knowledge of Linux Kernel and device drivers

      Deep software engineering experience to build complex software that millions of high performance systems are going to rely on.

      Deep experience with dealing with the open source community and the Linux kernel development process, politics and infrastructure

      The financial ability and personal circumstance to commit to a full time unpaid job supporting the software

      - things that anyone can pick up

      1. VoiceOfTruth

        Re: Hang on a mo ...

        -> Deep experience with dealing with the open source community and the Linux kernel development process, politics and infrastructure

        AKA the toxic LKML. Prepare to be shouted at. Prepare, when your code has a bug, to be vilified by people who couldn't do the job that you did.

        1. stiine Silver badge

          Re: Hang on a mo ...

          I beg to differ. Anyone can write a bug and call it a driver.

          1. Kurgan

            Re: Hang on a mo ...

            Or an init system.

          2. R Soul

            Re: Hang on a mo ...

            Or call it systemd.

            1. Anonymous Coward
              Anonymous Coward

              Re: Hang on a mo ...

              systemd is different - it causes a burning and itching feeling that can only be abated by booting a CentOS 6.5 Live CD and remembering the "Before Times".

        2. sreynolds Silver badge

          Re: Hang on a mo ...

          You always have the right to remain silent. Especially around a certain time of the month when the Finnish guy seems to have hot flashes and issues controlling his emotions.

      2. MiguelC Silver badge
        Coat

        Re: Hang on a mo ...

        >>Deep technical knowledge of the proprietary Microsoft file system

        >>Deep technical knowledge of Linux Kernel and device drivers

        >>Deep software engineering experience to build complex software that millions of high performance systems are going to rely on.

        >>Deep experience with dealing with the open source community and the Linux kernel development process, politics and infrastructure

        That looks just like it was taken out of my C.V.

        I might have embellished it a little bit, though

        1. TeeCee Gold badge
          Trollface

          Re: Hang on a mo ...

          You are working for Infosys and ICMFP!

      3. LDS Silver badge
        Devil

        Re: Hang on a mo ...

        You've just said that open source is quite useless since 99% of users and even developers can't read, modify or even build the code themselves without extensive training and support...

        And the 1% who can are actually already working professionally on it.

        1. VoiceOfTruth

          Re: Hang on a mo ...

          My experience of open source over the last 25 years: a lot of people talk the talk, a lot of people say 'use open source', but they themselves cannot read something much more complicated than 'hello world'. Easy coding is easy, and difficult coding is not within the capability of a lot of people who advocate open source. Have a look. Pick a language of your choice. Go is a good choice being fairly new and also quite topical. It has net/http in its standard library. There are umpteen tutorials on the web for 'setting up a basic web server'. There are tutorials out there for doing trickier stuff, but they are far fewer in number. The Go Forum is good and helpful, and not full of snarky people.

          There are plenty of good programmers out there. Many of them are not involved in open source. Many of them are not interested in the Linux kernel (sometimes being put off by the frequent hostility that can be found on the LKML). You can see the reaction when the NTFS code was released: instant hostility, not a friendly welcome with some advice.

          27,000 lines of code is a fair amount to go through. Then there is the subject itself. NTFS is not open source. While you might know about ext[234], ufs, etc., unless you have been digging around and doing your own research, you won't know if the NTFS code offered is 'correct' (I'm using the term in a general sense). In the case of NTFS it probably is, but 99% of users would not be able to get to grips with it in a timely way. So where does that leave you? The 1%.

          1. Clausewitz4.0
            Devil

            Re: Hang on a mo ...

            You can probably add to the capabilities required: reverse engineering the NTFS on-disk format to make a compatible open-source driver.

            1. Liam Proven (Written by Reg staff) Bronze badge

              Re: Hang on a mo ...

              It's been done. There are three.

              [1] The kernel already contains an original, all-FOSS, reverse-engineered NTFS driver, but it's read-only.

              https://www.kernel.org/doc/html/latest/filesystems/ntfs.html

              [2] There is a 2nd FOSS driver, from Tuxera, but it runs under FUSE.

              https://github.com/tuxera/ntfs-3g

              It works fine but using FUSE means that performance is limited and it is not possible, for instance, to boot from it.

              [3] The newest one, now FOSS, written by Paragon Software. All FOSS, native in-kernel driver, could be bootable if desired.

              But it's very big and very complicated, _because_ NTFS is big and complicated and not at all xNix-like.

        2. midgepad

          Re: Hang on a mo ...

          No, you appear to have said that.

          And it isn't true.

          If you need an NTFS driver in Linux, nobody can stop you arranging for one to exist.

          That isn't a guarantee that anyone will help you.

          If we don't need one, the utility of FLOSS or Linux is not diminished.

          Your logical flaw is visible on inspection.

        3. Yet Another Anonymous coward Silver badge

          Re: Hang on a mo ...

          >You've just said that open source is quite useless since 99% of users and even developers can't read, modify or even build the code themselves without extensive training and support...

          That's true of core parts of the operating system, but not for lots of other open-source libs.

          In my field, image processing / machine vision, you have a lot of libraries where features and algorithms are implemented by experts in the field who might not be expert software engineers. But as long as it works it's useful, and perhaps the code will later be optimised by a more experienced programmer who can spot inefficiencies even if they aren't an expert on the subtleties of the maths.

        4. LionelB Bronze badge

          Re: Hang on a mo ...

          > ... since 99% of users and even developers can't read, modify or even build the code themselves without extensive training and support...

          A bit like closed-source, then.

      4. IGotOut Silver badge

        Re: Hang on a mo ...

        @ Yet another....

        Let me list a few teeny companies that tick the boxes.

        Google

        Amazon

        Microsoft

        IBM / Redhat

    2. VoiceOfTruth

      Re: Hang on a mo ...

      -> Isn't one of the whole fricking points about "open source" that anyone can pick it up and develop (or at the very least maintain) it?

      If you know what you are doing, yes. Most people don't. And those that do are usually employed somewhere. Why do you think the whole subject of a read write driver for NTFS has been around for decades?

      1. martinusher Silver badge

        Re: Hang on a mo ...

        You're also going to be shooting at a moving target. Microsoft doesn't exactly go out of its way to make its technology transparent or stable because that invites competition. This is why you've never seen an ext4 subsystem for Windows (for example) despite Microsoft's claimed ongoing commitment to support Linux -- it would require a commitment to something they can't control.

        I would certainly join such an effort but although I've got more than enough experience in these areas I don't think I have the necessary knowledge, patience and persistence to do anything other than be a code jockey. I suspect I'm not alone -- interacting with Microsoft's technology has been an exercise in frustration since the beginning; it's usually a straightforward and not necessarily novel concept buried under multiple layers of obfuscation. It's a haven for puzzle solvers, those who love 'cracking the code', but not those of us who have a more goal-oriented mindset ("we just want to store and retrieve stuff").

        1. Ace2 Bronze badge

          Re: Hang on a mo ...

          Is the moving target thing still true? There is a massive installed base of Windows Server 2012, 16, 19, 22 - at some point it would be in their own interest to stop fiddling with the on-disk format.

          1. WolfFan Silver badge

            Re: Hang on a mo ...

            Paragon’s driver on the Mac supports NTFS all the way back to NT 3.1. I have been using it for years, on multiple Macs, and have never had a problem reading or writing to devices ranging from thumb drives to internal drives temporarily attached using assorted IDE (I’ve had it for a _long_ time) and SATA adapters to external USB/FireWire/Thunderbolt devices. I suspect that the problem is on the Linux end. Or due to, ah, current affairs.

          2. AndrueC Silver badge

            Re: Hang on a mo ...

            Depends what changes. NTFS is a very open-ended format. At its heart an MFT (Master File Table) record is a collection of fields (they call them attributes). An attribute has a length and a type field. Both these fields are 32 bits wide so MS can add new attribute types pretty much ad nauseam.

            I reverse engineered it when I was writing data recovery software many years ago and after initially disliking it for the lack of thought given to data recovery (HPFS did a much better job in that respect) I developed a grudging like for it. It is pretty much an almost infinitely modifiable database format(*) and in many respects very efficient. Admittedly the efficiency is mostly needed because of the overheads it brings but the result is a very capable file system.

            (*) I actually utilised that in our toolset. Once I had the NTFS driver in place, additional filesystems were dealt with by converting their metadata to our NTFS metadata and passing the result to the NTFS driver. An NTFS file system parser can handle pretty much any other file system, including HFS/HFS+ and even weird stuff like AS/400. NTFS offers pretty much a superset of all file system functionality.
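            The attribute layout described above is easy to see in code. Here's a minimal Python sketch (attribute offsets and type codes taken from public reverse-engineering documentation of NTFS; the record itself is synthetic, fabricated for illustration) that walks an MFT record's attribute list using just the 32-bit type and length fields:

```python
import struct

# Well-known attribute type codes, per public NTFS documentation
ATTR_NAMES = {0x10: "$STANDARD_INFORMATION", 0x30: "$FILE_NAME", 0x80: "$DATA"}
END_MARKER = 0xFFFFFFFF  # terminates the attribute list in every record

def walk_attributes(record: bytes):
    """Yield (type, length) for each attribute in a raw MFT record."""
    if record[:4] != b"FILE":
        raise ValueError("not an MFT FILE record")
    # u16 at offset 0x14 of the record header: offset of the first attribute
    (pos,) = struct.unpack_from("<H", record, 0x14)
    while pos + 4 <= len(record):
        (attr_type,) = struct.unpack_from("<I", record, pos)
        if attr_type == END_MARKER:
            return
        (attr_len,) = struct.unpack_from("<I", record, pos + 4)
        yield attr_type, attr_len
        pos += attr_len  # jump to the next attribute header

# Synthetic 'FILE' record: 0x18-byte header, three 16-byte attributes, marker
hdr = b"FILE" + bytes(0x10) + struct.pack("<H", 0x18) + bytes(2)
body = b"".join(struct.pack("<II", t, 16) + bytes(8) for t in (0x10, 0x30, 0x80))
rec = hdr + body + struct.pack("<I", END_MARKER)
print([ATTR_NAMES[t] for t, _ in walk_attributes(rec)])
# prints ['$STANDARD_INFORMATION', '$FILE_NAME', '$DATA']
```

            Note that adding a new attribute type doesn't break this loop: an unknown type is simply skipped via its length field, which is exactly the extensibility being described.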

            1. Gene Cash Silver badge

              Re: Hang on a mo ...

              OK, AndrueC for maintainer... any other votes?

              Sorry bub, you've been conscripted! Here's yer Model M keyboard and EMACS reference jug :-)

              Seriously though, it's a shame the GOOD Microsoft code is so obscure and deeply buried.

          3. jgard

            Re: Hang on a mo ...

            No, it's not still true, and people who offer these opinions either:

            i) Have very little experience and just jump on the 'Microsoft is shit' bandwagon because they know no better, or:

            ii) Haven't tried to develop on Windows in donkey's years, and think NT4 SP6a is still state of the art, along with shit documentation, inconsistent development standards and an absence of conventions for coding, style, documentation or naming, i.e. the late 1990s.

            Their last experience with Windows was 20 years ago when they spent 2 weeks ploughing through a 300 page white-paper attempting to set up a cluster that wouldn't work because there was an umlaut in one of the serial numbers. Things have changed a little since then.

            I have no great love of Windows, I've not touched it in over a year, and I'd never use it personally on my desktop due to its other purpose of delivering advertising and spyware. But the truth is that Windows Server is now a very capable and secure platform, the dev docs are excellent and accurate (unlike 20 years ago), and IMO they make the best dev tool on the market in Visual Studio.

            As for NTFS, people here are still banging on about it being crap; they're simply wrong. NTFS is an incredibly good filesystem, up there with the very best. It was way ahead of its time with some features, and due to its very well implemented preemptive journaling it is extremely robust and resilient.

            People seem uncomfortable that Microsoft makes some excellent technology now, and I just don't get it. Yes, they are a tech giant that has done dodgy stuff in the past. But come on, compare them to Google, Facebook, Oracle, Apple, Amazon - in terms of ethics and trustworthiness I would put MS above any of that bunch.

            1. VoiceOfTruth

              Re: Hang on a mo ...

              -> NTFS is an incredibly good filesystem, it is up there with the very best

              That's a bit too broad. It is not as good as ZFS. It does not have per-block checksums, for example. It doesn't have built-in replication. And so on. NTFS is a reasonable file system, but it's not up there with the very best.

              1. Yet Another Anonymous coward Silver badge

                Re: Hang on a mo ...

                I think NTFS is one of those "difficult 2nd album" projects. It has every feature anyone could ever imagine a file system having (at least in the mid-90s), most of them partly implemented or left as 'specification as implementation'.

                1. jgard

                  Re: Hang on a mo ...

                  Ok, and what does that mean exactly? What does NTFS lack? What has been partly done but unfinished? What are the real world ramifications of these issues you speak of?

                  As someone who works on this stuff a lot and has done for a long time, I do not recognise what you are saying. It's easy to make these general points. And it's even easier to hide behind them, sage-like and thoughtful, when their vagueness is such as to render them completely unfalsifiable.

                  I don't mean to be rude or confrontational, but I do get bored with this sort of stuff. So, here goes: what you said offers nothing in the way of useful information. So, could you please provide us with specific, accurate and falsifiable claims to back up your vague suggestions? Otherwise this sort of stuff is little more than noise.

                  1. martinusher Silver badge

                    Re: Hang on a mo ...

                    >What does NTFS lack?

                    I'm only looking at it from a user's perspective, but for me it lacks mountability -- every 'ix' filesystem can be mounted on a mount point that could be anywhere, so you don't end up with pathing problems. I'd like to think that the linkage to a physical device, the lack of file links (and the long-past-its-sell-by-date backwards file separators) are just driver idiosyncrasies, but it's difficult to tell and it's more trouble than it's worth finding out.

                    Also I suspect that there are components of the file system that are physically fixed in place on a disk. It's difficult to resize an NTFS partition, and that may be the reason why disk (along with Windows) performance drops off over time.

                    There may be some great technology lurking in there but until the wrapper's a bit better organized I'll just use it as a read-only system from Linux.

                    1. AndrueC Silver badge
                      Boffin

                      Re: Hang on a mo ...

                      Your complaint about mount points might be valid but I have my doubts. I can't think of anything about an NTFS volume that requires it to be treated as a self-contained volume. Windows links can link any folder structure anywhere so you can make 'D:\' appear under c:\Mounts\ if you wish (I actually do that on my development machine). I also think that assigning a drive letter to a volume is optional so I think you can indeed mount an NTFS volume under an existing folder structure just like you can with Unix variants.

                      > Also I suspect that there are components of the file system that are physically fixed in place on a disk.

                      No. Only the boot sector which is fixed at LSN 0 (possibly also the recovery boot sector at the other end of the volume). Everything else can be anywhere else within the volume. The boot sector specifies the location of $MFT (Master File Table) and everything else (including $MFT) is a file whose location is specified by a record in $MFT.

                      > It's difficult to resize an NTFS partition and may be the reason why disk (along with Windows) performance drops off over time.

                      What difficulty? Disk Management can do it in a couple of mouse clicks. Extend the partition; grow the volume. If there's no space to extend the partition, span the volume onto another partition. The only structures that might have to move are the recovery boot sector and $MFTMirr, which have to be moved if you shrink the volume (because they are normally placed at the end) and could be left in place if you grow it, but which MS might recommend be relocated to the new end-of-volume.

                      Any drop-off in performance can be attributed to the usual OS-agnostic reasons: software installation and disk fragmentation. On the latter front I will say that NTFS doesn't do much to avoid fragmentation - HPFS went to great lengths to do so. But then in my experience no file system was as determined as HPFS to keep files contiguous, so I suspect that NTFS is no worse in that respect than any other.

                      System performance slow-down, yeah. Maybe. Can't say I've ever noticed it nor attempted a side-by-side comparison. Maybe it's like the old 'boiling frog' thing :)

                      But we can't blame NTFS for that. I'd say blame users for installing random crap on their machines and - possibly - software publishers for writing shitty code and being lazy in clean-up.
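                      The "only the boot sector is fixed" point is easy to sketch in code: everything else on the volume is located by following a handful of fields in that one sector. A hypothetical Python fragment (field offsets from public NTFS documentation; the buffer here is synthetic, not a real volume):

```python
import struct

def parse_ntfs_boot_sector(bs: bytes):
    """Pull the handful of fields that locate everything else on the volume."""
    if bs[3:11] != b"NTFS    ":
        raise ValueError("not an NTFS boot sector")
    (bytes_per_sector,) = struct.unpack_from("<H", bs, 0x0B)
    sectors_per_cluster = bs[0x0D]
    (total_sectors,) = struct.unpack_from("<Q", bs, 0x28)
    (mft_lcn,) = struct.unpack_from("<Q", bs, 0x30)       # cluster of $MFT
    (mft_mirr_lcn,) = struct.unpack_from("<Q", bs, 0x38)  # cluster of $MFTMirr
    cluster = bytes_per_sector * sectors_per_cluster
    return {
        "cluster_size": cluster,
        "mft_offset": mft_lcn * cluster,            # byte offset of $MFT
        "mft_mirror_offset": mft_mirr_lcn * cluster,
        "volume_size": total_sectors * bytes_per_sector,
    }

# Synthetic boot sector: 512-byte sectors, 4 KiB clusters, $MFT at cluster 786432
bs = bytearray(512)
bs[3:11] = b"NTFS    "
struct.pack_into("<H", bs, 0x0B, 512)
bs[0x0D] = 8
struct.pack_into("<Q", bs, 0x28, 488636416)
struct.pack_into("<Q", bs, 0x30, 786432)
struct.pack_into("<Q", bs, 0x38, 2)
info = parse_ntfs_boot_sector(bytes(bs))
print(info["cluster_size"], hex(info["mft_offset"]))  # prints: 4096 0xc0000000
```

                      Since $MFT's location is just a number in this structure (and $MFT in turn records the location of everything else, itself included), nothing but the boot sector needs a fixed address.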

                2. Liam Proven (Written by Reg staff) Bronze badge
                  Joke

                  Re: Hang on a mo ...

                  NTFS was the _third_ album. :-)

                  The début was FAT. There are various remixes and edits, including FAT12, FAT16, VFAT, FAT32 and most recently the remastered exFAT.

                  Huge number of fans and admirers but now recognised as very basic and with limited instrumentation. Hardcore lovers don't care for the later versions with extensive post-production added.

                  The difficult middle album was OS/2 and the now-mostly-forgotten HPFS, which resolved all of the issues of the first record but introduced a lot of new ones.

                  Sadly the band split up as a result of recording this.

                  One half stuck with it until making a one-off supergroup with some members of AIX. They're still touring, under new names (EComStation, ArcaOS) but mainly play the now-much-admired JFS.

                  The other half made the difficult third album, called NT, with some former members of DEC, and you can hear the influence of DEC's seminal "VMS" in it if you listen carefully.

                  It's where the much-loved but difficult and somewhat inaccessible NTFS came from.

                  1. AndrueC Silver badge
                    Boffin

                    Re: Hang on a mo ...

                    > It's where the much-loved but difficult and somewhat inaccessible NTFS came from.

                    Ain't that the truth. I reverse engineered NTFS first (because Windows data recoveries were far more common than Files-11) but as soon as I started looking at Files-11 the provenance became clear. Even down to encoding variable-length bit fields (it's different, but very few file systems bother at all). $MFT is clearly son-of-INDEXF.SYS.

                    The biggest problem we had with Files-11 was returning the data to the customer. We used our generic Windows tool to process the disk image, so we were at risk of losing the fairly important attributes. In the end I modified our tool to generate a separate VMS script. Then we could copy the files and the script to our lonely VMS box and run the script to restore the attributes. Luckily one of our guys knew VMS quite well and could help with the directory path syntax.

                    We had a similar problem with Macintosh disks but thankfully Windows/NTFS supports Macintosh clients so we could write the files in a compatible way and have the Macintosh pull the files back over the network. Later on we bought in a program that could mount HFS and HFS+ volumes under Windows so we could just write straight to a target disk. We had a celebration in the office when that became available. No-one was forced to use the Macintosh in the corner any longer :D

              2. AndrueC Silver badge
                Boffin

                Re: Hang on a mo ...

                True, but it's possible they could be added without breaking the original design just by adding another attribute type. The file system is fairly unusual in having such flexibility. Parsing an MFT record is just a matter of walking the list of attributes (although one or more attributes might actually be stored external to the record - $DATA being the most common to be held externally, for obvious reasons). In fact a record may itself become so large that it occupies multiple MFT records, and I seem to recall it uses the same mechanism for specifying its external blocks as other attributes do.

                So in theory at least you can store any metadata you can imagine in an MFT record and it's just a matter of the OS reading the disk knowing what to do with it. In that sense it really is a lot like a database that you can use to store pretty much any schema.

                Whether it would be efficient or not is another matter. I would assume that the MFT parsing code in the Windows Kernel is highly optimised (presumably that's why they went with 32 bits for the attribute type since that is clearly overkill).

                1. Warm Braw Silver badge

                  Re: Hang on a mo ...

                  "it really is a lot like a database" - and WinFS would have been more so...

              3. jgard

                Re: Hang on a mo ...

                Well, I would suggest that your statement is too narrow. How can you say it's not as good as ZFS? On what criteria do you decide that?

                The very reason I said "up there with the best" is that it's a broad statement that is demonstrably true. Claiming one is 'better' than the other is just silly, and meaningless. NTFS and ZFS are both excellent, but they have different use cases. An environment using NTFS will usually be different to a ZFS environment - the OS will be different for a start and so will the storage platform.

                NTFS is most often used on top of a hardware RAID layer (from a mirror in a server all the way to an HP XP enterprise SAN). The standout features of ZFS are the software RAID capabilities it offers. NTFS hands off some responsibility to the RAID layer, allowing it to deal with issues like scrubbing / zeroing, striping, spindle workload distribution, and file and block integrity. ZFS does a lot of that stuff itself. That's exactly why it has built-in checksums and NTFS doesn't. Like all engineers, those working at Microsoft have to make tradeoffs and compromises. In NTFS, they came to the conclusion that checksums would cost more CPU cycles, RAM etc., but give virtually no real-world benefit. And they were correct, because the number of issues it would fix in the real world is insignificant compared to the performance gain.

                Regarding replication - have you not heard of DFSR on NTFS? It's available on all Windows Server OSes, and it is one hell of a replication engine. It uses remote differential compression to replicate whole volumes between hosts, scaling to huge mesh topologies with dozens of servers. And it does it using a fraction of the bandwidth it would normally require - on general file data it usually saves upwards of 90% in bandwidth.

                It absolutely IS up there with the very best, just like ZFS. Both are fantastic, but have slightly different strengths and use cases.
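                The bandwidth argument for differential replication is easy to illustrate. DFSR's actual RDC protocol is considerably more sophisticated (rolling signatures, recursive signature transfer), but even a naive fixed-block sketch in Python shows why a mostly-unchanged file costs almost nothing to replicate (the block size and file contents below are made up for the example):

```python
import hashlib

BLOCK = 4096

def delta(old: bytes, new: bytes):
    """Return (literal_bytes, reused_blocks): raw data a naive differential
    replicator would send vs. blocks referenced from the receiver's copy."""
    have = {hashlib.sha256(old[i:i + BLOCK]).digest()
            for i in range(0, len(old), BLOCK)}
    literals, reused = b"", 0
    for i in range(0, len(new), BLOCK):
        chunk = new[i:i + BLOCK]
        if hashlib.sha256(chunk).digest() in have:
            reused += 1            # send just a short block reference
        else:
            literals += chunk      # send the raw bytes
    return literals, reused

old = bytes(64 * BLOCK)                                   # 256 KiB, unchanged
new = old[:10 * BLOCK] + b"x" * BLOCK + old[11 * BLOCK:]  # one block edited
lit, reused = delta(old, new)
print(len(lit), reused)  # prints: 4096 63
```

                Only the one changed block travels as literal data; the other 63 are sent as references. Real RDC also survives insertions and deletions (which shift block boundaries) by using rolling signatures rather than fixed offsets.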

                1. VoiceOfTruth

                  Re: Hang on a mo ...

                  -> On what criteria do you decide that?

                  I gave you a couple of examples.

                  -> The standout features of ZFS are the software RAID capabilities it offers.

                  That is another of the criteria.

                  -> have you not heard of DFSR on NTFS?

                  It is nothing like ZFS replication, which is eons better.

                  1. jgard

                    Re: Hang on a mo ...

                    Is that really the best you can manage, mate? I addressed your claims; I explained the tradeoffs inherent in calculating checksums; I showed you were wrong about replication. And you double down by moving the goalposts and claiming checkmate? I wouldn't have bothered.

                    You clearly lack a mature and experienced understanding of engineering principles, design decisions and cost benefit analysis. You struggle to understand that filesystems are designed to deal with different scenarios and use cases. They are not all designed to have exactly the same strengths and weaknesses. I would go into more detail, but most people who read the reg understand this stuff implicitly, and I would be insulting their intelligence by repeating it here.

                    And yes, NTFS is up there with the best file systems, it also has different use cases to ZFS. Both are excellent and for you to double down on your original claims, simplistic and ignorant as they were, shows you don't get it.

                    Your 'criteria' and 'examples', are silly and ill defined, your argument is incoherent, and looking at the upvotes, the experienced techies around here agree with me.

                    1. VoiceOfTruth

                      Re: Hang on a mo ...

                      -> I explained the tradeoffs inherent in a calculating checksums

                      I'm afraid you don't know what you are writing about. That's OK, I can educate you.

                      NTFS is an old file system. There is nothing wrong with that per se. Checksumming, at a time when CPUs were much less powerful, was too 'expensive'. There was a paper I read about 10 years ago about ZFS checksums and why they were incorporated into ZFS. It was generally considered at the time that checksumming was too CPU-expensive (as it used to be). But with testing it was found that CPUs had advanced sufficiently that it was no longer a problem. Checksums are no longer a tradeoff and haven't been for at least 10 years - your file system either has them or it doesn't. NTFS does not.

                      So no NTFS is not up there with the best file systems. The fact that you don't know about the 'cost' of checksums tells me that you have very limited knowledge on the subject.

                      Checksums are not silly and ill-defined. Nor is replication. The fact you describe them as such is mind-boggling. The fact that you have upvotes tells me those techies who gave you those votes don't know what they were voting for either. No problem. I've met a lot of blaggers in my career. People who talk the talk, but have no trousers.

                      Let me add another feature where ZFS is way better than NTFS: snapshots. Before you even think about mentioning Volume Shadow Copies, they are not as advanced as ZFS. ZFS snapshots are instantaneous. While a Volume Shadow Copy can be quick, it is not the same as instantaneous.

                      And another - ZFS is copy-on-write at the block level, so data on disk is always consistent - unlike with NTFS.

                      Now, if you have trouble understanding some of these things I can point you to some basic tutorials.
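                      On the cost point: the default ZFS checksum (fletcher_4) is genuinely cheap - one addition into each of four 64-bit accumulators per 32-bit word of data. A minimal Python sketch (accumulator layout per the public ZFS on-disk documentation; the data block below is synthetic):

```python
import struct

def fletcher4(data: bytes):
    """ZFS-style Fletcher-4: four running sums over little-endian 32-bit words,
    each accumulator truncated to 64 bits. One add per accumulator per word."""
    a = b = c = d = 0
    mask = 0xFFFFFFFFFFFFFFFF
    for (w,) in struct.iter_unpack("<I", data):
        a = (a + w) & mask
        b = (b + a) & mask
        c = (c + b) & mask
        d = (d + c) & mask
    return a, b, c, d

block = bytes(range(256)) * 512                      # a 128 KiB record
good = fletcher4(block)
flipped = bytearray(block)
flipped[1000] ^= 0x01                                # single-bit rot
print(fletcher4(bytes(flipped)) != good)             # prints: True
```

                      A single flipped bit changes all four sums, so silent corruption is caught on every read - and the per-word work is small enough to vanish next to the disk I/O on any modern CPU.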

                      1. Anonymous Coward
                        Anonymous Coward

                        Re: Hang on a mo ...

                        You should marry ZFS then.

            2. midgepad

              Re: Hang on a mo ...

              Microsoft didn't exactly write NTFS did they?

              Came in from another system with its maintainers IIRC.

              1. Yet Another Anonymous coward Silver badge

                Re: Hang on a mo ...

                It was invented by Dave Cutler's team, which came from DEC, and it does bring in some features from VMS's rather complicated file system.

                Except for the really nice user-facing file versioning system.

          4. Anonymous Coward
            Anonymous Coward

            @Ace2 - Re: Hang on a mo ...

            If you know exactly what you're doing, you can get away with it.

            Microsoft doesn't need to fiddle with the on-disk format. All they need to do is "slightly tweak" APIs and specs without publishing the modifications. Just look at the Samba project.

          5. Anonymous Coward
            Anonymous Coward

            Re: Hang on a mo ...

            "at some point it would be in their own interest to stop fiddling with <insert Microsoft Product name here>"

            This has long been a blind spot for Microsoft. They'll have something that's reasonably decent, but fuck it up with massive unnecessary changes in the next version. It's almost like their internal culture only values revolution, and has the lowest regard possible for evolution. Then, too, I guess they have to have something to keep their 20,000+ programmers busy.

          6. Anonymous Coward
            Anonymous Coward

            Re: Hang on a mo ...

            "at some point it would be in their own interest to stop fiddling with the on-disk format."

            Yeah, and they should adopt Notepad as their standard coding editor, it's been around and un-fiddled with forev...oh... crap.

        2. WolfFan Silver badge

          Re: Hang on a mo ...

          Hmm… I have Paragon’s NTFS drivers on various Macs around here. I have had it installed for years. Is it really that much harder to get it to work in Linux than macOS?

          1. chuBb. Silver badge

            Re: Hang on a mo ...

            It's not "hard" to use or install, it's hard to maintain.

            And it was a saga to get it included in the kernel, i.e. a standardised bit of the Linux OS, not a 3rd-party add-on.

          2. Liam Proven (Written by Reg staff) Bronze badge

            Re: Hang on a mo ...

            Have you got the source code for it?

            Does it run inside macOS' xnu kernel?

            I don't think so... :-)

        3. bazza Silver badge

          Re: Hang on a mo ...

          This is why you've never seen an ext4 subsystem for Windows (for example) despite Microsoft's claimed ongoing commitment to support Linux -- it would require a commitment to something they can't control.

          Er, I've seen plenty of ext4 drivers, just not from MS.

          Windows allows you to write a file system driver. In a way it is more open than Linux; you can maintain it regardless of what MS do. You don't have to persuade anyone else like the LKML...

          1. AndrueC Silver badge
            Boffin

            Re: Hang on a mo ...

            Windows allows you to write a file system driver.

            Indeed it does, and has for a long time.

            It's how networks and CD-ROMs are dealt with, amongst other things.

          2. eldakka Silver badge

            Re: Hang on a mo ...

            > Windows allows you to write a file system driver. In a way it is more open than Linux; you can maintain it regardless of what MS do. You don't have to persuade anyone else like the LKML...

            You don't need to persuade anyone - or even discuss anything with anyone - on the LKML to write a filesystem driver.

            You only need to do that if you want the fs driver to be incorporated directly into the monolithic Linux Kernel as an embedded - out of the box - open source featureset of the Linux Kernel.

            Are you saying it's easier on Windows than it is in Linux to get your filesystem driver merged into the Windows Kernel and be released as part of the official O/S as distributed directly from Microsoft?

            1. bazza Silver badge

              Re: Hang on a mo ...

              Except that, whereas MS's driver interfaces are generally very stable - an aid to the independent developer - things are somewhat unpredictable in Linux. That does not help the independent developer.

          3. Jou (Mxyzptlk) Silver badge

            Re: Hang on a mo ...

            You can always write your own filesystem driver for linux. You just won't get it into the official kernel - that is the actual topic here.

  3. JimmyPage

    Righy ho ...

    So the problem is "just" a lack of technical knowledge ?

    Or (prepares hard hat) is it less the technical knowledge, and more the technical knowledge that is able to negotiate the byzantine kernel development process ?

    Not the greatest advert for Linux then.

    Surprised MS hasn't taken ownership of this, what with their Linux-for-Windows project.

    1. Anonymous Coward
      Anonymous Coward

      @JimmyPage - Re: Righy ho ...

      Add some extra cushioning, Jimmy.

  4. oiseau Silver badge
    Facepalm

    Light on the issue?

    If you can help shed light on the issue ...

    Sure.

    This was a bad, bad idea from day one.

    The first indicator of the state of things was the useless garbage rant from Linus Torvalds.

    And now we have ~27K lines of code added to the kernel that, for whatever reason, have no maintainer.

    Like I said last year:

    It's much too little, far too late and with excess caveats attached.

    ---> Timeo Danaos et dona ferentes <---

    O.

    1. VoiceOfTruth

      Re: Light on the issue?

      -> The first indicator of the state of things was the useless garbage rant from Linus Torvalds.

      There was an indicator before that. The very first response when it was announced on the LKML was 'So how exactly do you expect someone to review this monstrosity ?'

      Welcome to the LKML, your call is not important to us, we don't care if you are offended, all you will receive here is nasty comments.

      Let me rewrite that for the benefit of the ungrateful: Wow! That's a huge offering, 27,000 lines of code. It's a bit too big for us to handle in one go. Can we discuss how to better present this. Thanks for your work!

      1. LosD

        Re: Light on the issue?

        BS. Garbage is garbage.

        Should I also thank you if you dump a ton of manure in front of my house? It certainly has value to some, so I guess I should be grateful...

        If you want code into the kernel, make an effort, don't just throw code over the wall and expect others to fix the mess.

        1. VoiceOfTruth

          Re: Light on the issue?

          Evidently you don't know what you are commenting about. Torvalds was talking about GitHub when he was referring to 'garbage'. An exact quote: 'github creates absolutely useless garbage merges'.

          Paragon NTFS has been in use reliably for a very long time. It is not garbage. Your comment though? 100% manure.

          1. Anonymous Coward
            Anonymous Coward

            @VoiceOfTruth - Re: Light on the issue?

            Linus was right when he pointed out that this addition, as it was presented, is unmaintainable. You're selectively quoting him.

        2. jgard

          Re: Light on the issue?

          That's a very poor analogy.

          If someone throws manure over your wall, it's a destructive and deliberately adversarial act. There's no good will involved, they are not trying to help you. It doesn't take any real time or effort on their part (they probably just slipped a few quid to the guy that did it). Most importantly, the cleanup is time consuming, expensive, unpleasant and takes FAR MORE EFFORT than that expended by your adversaries who chucked it over your wall.

          Paragon worked hard for a long time to get to the point where they first submitted their code. Twenty-seven thousand lines of kernel-level code, off their own backs. They did it to help; to contribute to the biggest and most successful open source project on the planet. It was all about good will and collaboration, and they did it for free. Most importantly, the cleanup is trivial in the extreme; if it needs some attention, that can be conveyed with a few words. There is NO requirement for the maintainers to clear loads of shit up from their garden, a simple polite email or two will suffice.

          Think of it in terms of thermodynamics: Paragon created order out of disorder. That takes enormous effort, and requires next to no effort to reverse. Leave it all alone and let the 2nd law do its stuff - entropy wins. Dumping loads of crap on your clean garden achieves the opposite: it takes your lovely ordered patio and plants and turns them into one big shit-smelling lump of entropic mess. The 2nd law won't help you there, which is why you need to put in MUCH more effort to get back to where you were.

          The Paragon guy (or guys) was naive about the contribution process. We are all naive when joining a community and helping out, but in most communities you can expect to be welcomed; especially when you are working (hard) for them, for free.

          The point is we are all human, and it's nice to be encouraged and appreciated. IT people are no different, and that includes you, my friend. Furthermore, some people have mental health issues and struggle with self-esteem. To be publicly castigated on a forum by people you respect - and likely feel inferior to - can be hugely damaging to them. You don't have to be a 'snowflake' to understand that, it's basic humanity.

          Open source is fucking brilliant; the current prominence of OSS in the industry is the best thing to happen in IT for decades. Contributors should be valued and nurtured, and if they make a mistake when they start out, they should be encouraged. If people act like 14-year-olds - accusing newbies of throwing manure over a wall - why would anyone contribute?

          Why the attitude? What's the point? And what does it do to help OSS? People need to start acting like adults. I know LT ain't perfect but he has definitely improved over the last couple of years, maybe you should take a leaf out of his book? Just sayin.

          1. eldakka Silver badge

            Re: Light on the issue?

            That's a bit revisionist, isn't it?

            > Twenty seven thousand lines of kernel level code of their own backs.

            They already had a closed-source NTFS driver. Most of that 27k lines had already existed for years in their closed-source, commercial driver. Those 27k lines were just a 'polishing' of what they already had to make it more open-source friendly.

            Then, rather than contacting someone and saying "hey, we're working on moving this previously closed-source driver to open-source and merging it into the kernel, can you give us some process advice and code-review - or recommend those who could", they just decided to make a brand-new, totally unexpected out of the blue pull request of 27k lines of unvetted 'surprise' code. And expected it to be merged into the kernel.

            The manure analogy above is entirely appropriate. Of course manure is useful. It's a great fertilizer for plants, so if you have a green thumb and keep a nice garden, or even have a farm, it's extremely useful - if you know it's coming, you've had time to prepare where to put it, and you've told the delivery agency where, and how much, to unload it. Someone turning up unannounced and dumping 10 cubic metres of manure on the pristine, perfectly kept, display-quality lawn you show off to visitors and passersby - rather than the 2 cubic metres you could find a use for around the back, next to the barn, on the bare dirt patch you could have prepared beforehand, next to the shovels, wheelbarrows and fertilizing equipment, with some assistance arranged to distribute it to the places prepared for it - is not helpful. One might even call it arrogant, crass and ignorant, which is apparent from the tone of the kernel maintainers.

            The blame here lies solely with Paragon and their attitude to the kernel developers, evident in their actions and the way they initially went about adding their code to the kernel.

            1. VoiceOfTruth

              Re: Light on the issue?

              -> They already had a closed-source NTFS driver

              Boo hoo. You don't like the fact that once upon a time it was closed source. Stand by that statement and give up Mozilla and LibreOffice for a start. Go back to Gnumeric.

              -> The manure analogy above is entirely appropriate.

              No it isn't. Did you read the code? I don't suppose you did. Perhaps you are not competent enough to determine whether something is manure or not.

              If I was Paragon I would not have anything more to do with Linux. The Linux world is full of miserable sods who have no manners.

              1. eldakka Silver badge

                Re: Light on the issue?

                > Boo hoo. You don't like the fact that once upon a time it was closed source.

                It's got nothing to do with whether it was closed source or not beforehand. The point was that you said:

                > Paragon worked hard for a long time to get to the point when they first submitted their code. Twenty seven thousand lines of kernel level code of their own backs.

                Implying that out of the goodness of their hearts they went and wrote 27k lines of code for the Linux kernel.

                My point is that this is not true. Those 27k lines of code already existed in their original closed-source code. They did fuck-all 'work' for the Linux kernel. They did some updates to their already existing code to incorporate it into the kernel. They did a small amount of work "off their own backs" for the Linux kernel, namely the aforementioned modifications before integrating it into the kernel.

                Whether it was previously open or closed is only relevant to the fact it already existed, and the work they did previously to create that code was done for commercial purposes - that is, they were paid to do that work, therefore it was not "off their own backs" as you put it. So crediting them with doing "27k lines of work off their own backs" to put it into the kernel is misplaced. They certainly get credit for open-sourcing what was previously closed-source code. Absolutely.

                But that is not what is at issue here; it is the way they went about it that is at issue. Therefore you throwing around "27k lines of code" and implying they did a lot of work for the Linux kernel in creating these 27k lines of code is revisionist history with the purpose of misleading readers, which leads one to wonder if you have a relationship with the dev team or organisation.

                > No it isn't. Did you read the code? I don't suppose you did. Perhaps you are not competent enough to determine whether something is manure or not.

                You understand that manure is a fertilizer, right? That it is actually good shit? My whole example showed how manure is good stuff, great for fertilizing plants. It has its uses. If used appropriately manure is awesome. However, something great if treated badly is bad. My usage of the manure analogy wasn't to speak to code quality, it was to continue using an analogy you and others had already started. It was to show that if an appropriate process is followed shit is great, but if shit is treated like shit, it's just shit - a stinky mess no-one wants and has to be cleaned up by those who didn't make the shit.

                1. VoiceOfTruth

                  Re: Light on the issue?

                  -> Those 27k lines of code already existed in their original closed-source code

                  So what? Give up LibreOffice immediately if you use it. If you don't like something that was once closed source you have a petty attitude. Give up MySQL too.

              2. Missing Semicolon Silver badge

                Re: Light on the issue?

                It is entirely possible that, as a closed-source codebase, it can be shipped as a product even if the code is an unmaintainable, uncommented mess. To be part of the kernel, the code must be clear and understandable.

          2. VoiceOfTruth

            Re: Light on the issue?

            Note that even when you explain it as kindly as you do, some people will still give you thumbs down. I think for some people Linux is a cult.

            As for contributing code, it is not as though every contributor to the kernel is a genius programmer - far from it. Go and read the LKML for the tiny but necessary changes which occur every day.

      2. Anonymous Coward
        Anonymous Coward

        @VoiceOfTruth - Re: Light on the issue?

        If you give sh*% to someone, at least wrap it in a shiny packaging. And don't be offended if you don't get gratitude in exchange.

        1. VoiceOfTruth

          Re: @VoiceOfTruth - Light on the issue?

          All you miserable people out there, that is your response to somebody who gives you a chunk of code which you have been asking for for 25 years or more. It wasn't sh*%. The fact that you term it that tells the world enough about your attitude.

          The world does not owe Linux a sodding thing.

          1. Androgynous Cupboard Silver badge

            Re: @VoiceOfTruth - Light on the issue?

            You don’t like Linux. We get it. Enough already.

            1. VoiceOfTruth

              Re: @VoiceOfTruth - Light on the issue?

              Wrong.

    2. Tim99 Silver badge
      Windows

      Re: Light on the issue?

      A closer translation from the normal is "I fear the Greeks even when they bear gifts".

      After developing stuff, including shrink-wrap software, in an MS environment, I thought that might be a reasonable assessment. Admittedly from someone who started making long-term plans for retirement when I looked inside Vista...

  5. Anonymous Coward
    Anonymous Coward

    Nitpick

    > A "PR" is Git's somewhat unintuitive term for the process of asking a project's managers to incorporate changes

    "Pull request" is GitHub speak. GitLab calls it a "merge request". In git itself I don't recall there being a specific name.

    1. ICam

      Re: Nitpick

      I think it's also fair to say it's not "unintuitive".

      1. Will Godfrey Silver badge
        Facepalm

        Re: Nitpick

        Indeed. Seeing as you 'Pull' code from a repository, a 'Pull Request' is about as obvious as it gets.

        1. Anonymous Coward
          Anonymous Coward

          Re: Nitpick

          > Seeing as you 'Pull' code from a repository, a 'Pull Request' is about as obvious as it gets.

          Err… behind the curtains a Github "pull request" is actually a git merge.

          A git pull itself is a fetch followed by a merge onto the local branch (automatic if you haven't changed the defaults and a fast forward is possible), which is a different thing.
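          You can watch that equivalence for yourself. Here's a throwaway sketch (the temp directory and the repo/user names are made up for the demo) showing that a fetch, followed by a merge of what was fetched, lands you in exactly the same state as a pull would:

          ```shell
          #!/bin/sh
          # Sketch: `git pull` is just `git fetch` followed by `git merge`.
          # Runs entirely in a throwaway temp directory; names are illustrative.
          set -e
          tmp=$(mktemp -d)
          cd "$tmp"

          git init -q --bare origin.git
          git clone -q origin.git alice

          # Alice publishes a first commit.
          cd alice
          git -c user.name=alice -c user.email=alice@example.com \
              commit -q --allow-empty -m "first"
          git push -q origin HEAD
          cd ..

          git clone -q origin.git bob     # bob's clone only contains "first"

          # Alice publishes a second commit while bob isn't looking.
          cd alice
          git -c user.name=alice -c user.email=alice@example.com \
              commit -q --allow-empty -m "second"
          git push -q origin HEAD
          cd ..

          # Bob updates the long way round: fetch, then merge what arrived.
          cd bob
          branch=$(git symbolic-ref --short HEAD)
          git fetch -q origin
          git merge -q --ff-only "origin/$branch"   # same end state as `git pull`
          git log --oneline
          ```

          Running a plain `git pull` in bob's clone instead of the last fetch/merge pair ends on the same commit - the pull is just those two steps rolled into one command.
          
          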

    2. bombastic bob Silver badge
      Unhappy

      Re: Nitpick

      I've never liked the term "pull request". I am not making a request to do a 'git pull'. It is actually a request for the maintainer to merge what I did in my fork of his repo. "Merge Request" does what it says on the tin.

  6. Zanzibar Rastapopulous

    Sauce...

    It strikes me that the problem isn't so much the closed source code, as the closed standard.

    Standards seem to have been abandoned a lot recently; it's time they were more tightly imposed on companies by governments.

    1. ThatOne Silver badge
      Devil

      Re: Sauce...

      Standards stifle monopolies, so they need to be avoided or bypassed wherever possible. A good old proprietary heavily patented tech = profit!

      1. Anonymous Coward
        Anonymous Coward

        Re: Sauce...

        Except of course the standards org monopoly on the standards you need...

        1. ThatOne Silver badge
          Facepalm

          Re: Sauce...

          > the standards org monopoly

          The organization setting standards is (or at least should be) a non-profit, so your argument is just an unsubstantiated claim meant to preemptively discredit the target.

          The only "monopoly" a standards organization should have is in setting those standards, and that's actually the whole point, not a flaw.

  7. GraXXoR

    Obligatory XKCD

    https://xkcd.com/2347/

  8. drankinatty

    Ouch! The Unforeseen Consequences of Geo-Politics, an Insecure Dictator and an Open-Source Filesystem...

    Linux has had a bad run of luck with filesystem developers: one forced to abandon a wonderful early journaling filesystem after being jailed for killing his wife, and now, after the much-anticipated merge of read-write NTFS into the kernel, the developers disappear (not unlike the political rivals, dissidents and disloyal oligarchs associated with the same insecure dictator). Yikes.

    The challenge here is the size of the project and the specialized NTFS knowledge involved, which is not common among typical open-source developers. The knowledge and talent exist in the community; the trick is getting the right people together, in sufficient number, to prevent the project from being immediately overwhelmed (both technically and in man-hours).

    It's a shame SuSE (or another corporate sponsor) wasn't involved in getting the Paragon code into the kernel (it was promoting ReiserFS as the default filesystem for SuSE/openSUSE before ext3). A project of this size either takes a lot of luck and a lot of talent to salvage, or backing with a deep pool of resources to draw from.

    One thing is certain. A swift end to Putin's senseless war in Ukraine and the swift and safe return of the talented developers that helped make this code possible would be the best outcome. How unfortunate if they too are collateral damage from the slowly unfolding nightmare in Ukraine. If the world can find a way to get read-write NTFS into the Linux kernel, odds are good it can find a way to keep the code maintained through these troubled times. Here is to hope on all fronts.

  9. TheGriz
    Megaphone

    Drop in the bucket for Elon Musk

    If Elon would lay out just a tenth of a percent of what he's spending to buy Twitter, this driver would have tons of people working on maintaining it.

    C'mon Elon, your money is better spent on this project anyways.

  10. SVirtan

    The owner replied on May 1

    """

    Hello Linus, Kari and all.

    First and foremost I need to state that active work on NTFS3 driver has never stopped, and it was never decided to "orphan" NTFS3. Currently we are still in the middle of the process of getting the Kernel.org account. We need to sign our PGP key to move forward, but the process is not so clear (will be grateful to get some process desciption), so it is going quite slow trying to unravel the topic.

    As for now, we can prepare patches/pull requests through the github, and submit them right now (we have quite a bunch of fixes for new Kernels support, bugfixes and fstests fixes) -- if Linus approves this approach until we set up the proper git.kernel.org repo.

    Also, to clarify this explicitly: in addition to the driver, we're working of ntfs3 utilities as well.

    Overall, nevertheless the NTFS3 development pace has been slowed down a bit for previous couple of months, its state is still the same as before: it is fully maintained and being developed.

    And finally, we apologize for late reply; I allowed me short vacation after most restrictions because of covid ended up this month in Germany.

    Thanks.

    """

    https://lore.kernel.org/lkml/0f48e2eb2b0740b1b85e3b8d910c4bd8@paragon-software.com/
