Am I missing something?
My box at home is Fedora 29 and Suse Leap15 and both Linuxes can write to NTFS disks just fine.
Love it or hate it, Linux users in a Windows world must deal with Microsoft's New Technology File System (NTFS). This has always been a pain in the rump. Even after Microsoft finally gave up on its anti-Linux rhetoric, released its patents to the open-source community, and expressly opened up its exFAT patents, we still …
It depends. Maybe you're using something I don't know about, as I try not to write to NTFS very often, but if you're using the traditional methods, you'll either get something which works but slowly, or something which balks when presented with unusual drives. Another restriction on both is that they often can't deal with an NTFS disk that wasn't cleanly written and unmounted. A clean unmount used to be the normal result of a Windows shutdown, but it no longer happens by default unless the user either restarts or changes a relatively hidden setting. Supposedly, this version should run fast and support that, so while I don't need it now, I'll welcome it.
One of my boxes at home (this one) is also Fedora 29 (yeah, ought to update it I know) but last year I added an NTFS drive from an older windoze system - although I expected some problems and had cloned the drive first anyway, it worked straight off and has done ever since.
I'll be creating a new box with a new Fedora install soon, so I'll be importing stuff from this box (and getting rid of the NTFS drive) to speed things up a little.
The article seems to have been written against a background of earlier Linux experiences maybe.
"NTFS-3G, which works with the Filesystem in Userspace (FUSE), is slow, really, really slow."
Seems fast enough here!
"On Linux, it also can only read from NTFS systems."
Strange, because my Linux system (Slackware64) seems unaware of this limitation, and writes quite happily to NTFS drives.
I think you are confusing the FUSE implementation with the old, native kernel NTFS driver, which was extremely limited as well as being very slow.
NTFS-3G might not be fast by industrial standards, but it's fine for domestic and small office use.
Yeah, the author probably is mixing up things. That very paragraph quoted from the article goes on to cite the Arch Wiki NTFS-3G article. The very first line states:
"NTFS-3G is an open source implementation of Microsoft NTFS that includes read and write support (the Linux kernel only supports reading NTFS)."
One of the issues I've found is that on a dual-boot system it's best to switch off Windows 10's fast-startup option, have the system shut down properly between reboots, and not leave the filesystem in a suspended fast-boot sleep state (the state that lets it boot back into Windows more quickly on a restart).
In truth, Microsoft should (in 2021) play safe and switch off 'fast-startup' by default if it can see there are other partitions on the system disk, that could potentially boot another operating system (i.e. Linux) and read the Windows partition while it is in this 'iffy' fast-startup suspended state, but Microsoft developers still think there will only ever be one operating system running on a laptop/desktop, and that's Windows.
(The same can be said of running Windows 7's older version of chkdsk on a Windows 10 system partition that sits alongside it on the system disk - it'll end in tears. Though it's fine to use the newer version of chkdsk from Windows 10 on a Windows 7 system partition.)
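For anyone wanting to do this themselves: fast startup piggybacks on hibernation, so there are two usual switches. A sketch below - the `powercfg` command and the `HiberbootEnabled` value are real Windows mechanisms, but check your own build before relying on them:

```
; Option 1: from an elevated command prompt (this also disables hibernation):
;     powercfg /h off
;
; Option 2: registry-only, which keeps hibernation available but disables
; fast startup. Sketch of a .reg fragment to merge:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Power]
"HiberbootEnabled"=dword:00000000
```

After either change, a normal "Shut down" actually unmounts the NTFS volumes cleanly, which is what the dual-boot scenario above needs.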
In truth, Microsoft should (in 2021) play safe and switch off 'fast-startup' by default if it can see there are other partitions on the system disk
But that would mean them actually accepting that there might, possibly, in some parallel universe be some oddball who doesn't think computers exist for the sole purpose of running Windoze. OK, a bit of exaggeration for effect, but basically MS don't care if someone wants to try and run a different OS - any problems involved in doing that are nothing to do with them and there's no benefit to them of taking such weird behaviour into account when deciding what their OS should do.
Logically, they could easily have given Windoze full read/write capability for a variety of "foreign" file systems. But they didn't do that for any of them for the simple reason that it suited them, and still does, to make it as hard as possible for anyone to stray from the Windoze ecosystem.
I've never seen the point in Fast startup or Hibernate, first thing I disable on my personal machines.
Agreed, though I have seen some people close laptop lids, pack in briefcase, then re-open in another spot and "just keep going". I prefer doing proper shutdowns and restarts.
in yon olden times, before covid, i used to really rely on the "just keep going" feature for my commute. 25 minutes on a boat. tap tap tap. close laptop and get on a bus. tap tap tap. 90 minutes later, close laptop and hoof it to work. plug in power, and tap tap tap.
sure, booting doesn't take that long (full-disk encryption notwithstanding), but it's still 3 minutes. but finding where you left off does. even if every application is fully stateful, you'll have web pages that need to reauthenticate, and maybe a vpn that needs new creds. and the phone tethering. it just goes on.
@MrREynolds2U Is this fast? I thought it was just the new standard, having just replaced my creaking dual-core 1.8GHz Intel homebrew, which despite being 14 years old only took 25 secs (still quicker than the original poster's 3 minutes). I do remember that Windows did seem to take a boringly long time to start, which was one of the reasons I eventually gave up on it around 8 years ago.
That time is to get to the LightDM login. It is an Asus ROG Zephyrus G14 with a bog-standard Linux Mint 20.4 XFCE install apart from the NVidia driver (my eyesight is no longer up to reading 2560x1440 on a 14-inch screen, which is what comes up with the nouveau driver).
Does that make it a Ferrari? I just bought what I thought looked like a reasonable laptop on which I could do a bit of FreeCAD designing as well as running some photograph manipulation, without having to wait for disk swapping, which I have turned off.
I don't know exactly how X orders all its sub processes but I believe X is running long before the login manager.
It is running an AMD Ryzen 9 with 16GB and no swap file or partition. I did change the standard 256GB Samsung SSD to a more useful 2TB, but I don't think the speed changed by more than about 5%.
journalctl -b shows that in fact lightdm is ready for user login after only 5 secs, but I think there is some stuff that happens between grub accepting the choice of Mint and when the real kernel starts to get loaded.
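On a systemd-based distro like Mint, you can break the boot time down rather than eyeballing journalctl. A minimal sketch, assuming systemd-analyze is present (it ships with systemd), with a fallback message where it isn't:

```shell
#!/bin/sh
# Sketch: summarize where boot time goes on a systemd distro.
if command -v systemd-analyze >/dev/null 2>&1; then
    systemd-analyze time          # firmware + loader + kernel + userspace totals
    systemd-analyze blame | head  # slowest-starting units first
else
    echo "systemd-analyze not available on this system"
fi
```

Note that `systemd-analyze` only measures from the point the kernel hands over, so the gap between GRUB accepting the menu choice and the kernel loading (mentioned above) won't show up in its totals.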
"I've never seen the point in Fast startup or Hibernate"
The point isn't as much the startup speed as to have all the required windows / applications already open with the files I'm working on already loaded and ready to go.
I know that efforts have been made to re-open closed apps whenever they are still open on shutdown but it's still too patchy for my liking.
I suppose the problem for me is that I used to be a heavy Lotus Notes and Opera (Presto) user many years ago, so I just let the OS close both and they re-opened where I left off. Outlook has only recently introduced the feature (I'll be enabling it once helldesk installs it) and in Edge I've enabled the function to re-open websites.
I’ve actually had to re-enable hibernation recently because of the abomination that is Modern Standby being forced down my gullet by Microsoft/Lenovo/Intel. If I want my applications/open documents up and running after I lift the lid, it is now the only option that reliably avoids turning my laptop bag into a disconcertingly hot oven.
NTFS is a shit-show. I can only warn users not to rely on it. It once wiped around 500 MB after a filesystem check (I had a backup) - and that was entirely under Windows. NTFS is like everything Microsoft creates. It piles up trash and eventually explodes, forcing you to restart from zero.
If you are mixing Windows and Linux with NTFS, you may want to run the following in a Linux shell:
sudo find / -iname '.fuse_hidden*' 2> /dev/null
Those ".fuse_hidden" files may appear when you delete a file on NTFS under Linux and a process still holds a handle on it. In other words, those are "failures to delete".
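A narrower variant of the search above, scoped to a single mount point rather than all of `/` (which is slow). This is a sketch - the `/mnt/ntfs` path is just an example - and it additionally uses `lsof`, where installed, to show which process still holds the deleted file open:

```shell
#!/bin/sh
# Sketch: look for leftover .fuse_hidden files on one NTFS mount point.
# /mnt/ntfs is an example path; pass your own as the first argument.
MOUNTPOINT="${1:-/mnt/ntfs}"
find "$MOUNTPOINT" -iname '.fuse_hidden*' 2>/dev/null | while read -r f; do
    echo "leftover: $f"
    # If lsof is installed, show which process still holds the handle:
    command -v lsof >/dev/null 2>&1 && lsof -- "$f" 2>/dev/null
done
```

Once the process holding the handle exits, the `.fuse_hidden*` file can be deleted normally.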
Is this not a symptom of the fundamental incompatibility between the *nix philosophy regarding operations on files that are still being referenced in some way by other processes, and the Windows philosophy regarding the same?
NTFS is designed with the latter philosophy in mind, use within an environment using the former philosophy is bound to cause some wrinkles...
When I finally nuked my Windows 7 partition and went Linux full time some years ago, I installed NTFS-3G in order to make a backup of /home with rsync in an external USB HDD that had a NTFS partition.
The file permissions were all messed up, so, huh... according to the Arch wiki page referred to here in the forum there is a way to preserve permissions, but now I am thinking, wouldn't it be better to just get a new external HDD and format it with ext4?
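For the record, the Arch wiki route mentioned above boils down to mount options. A sketch of the two usual approaches - the device name, mount point, and uid/gid values here are examples that you'd need to adapt:

```
# /etc/fstab sketch -- device, mount point, and uid/gid are examples.

# Simple approach: map everything on the volume to one user and fake
# the permission bits via umask:
/dev/sdb1  /mnt/backup  ntfs-3g  uid=1000,gid=1000,umask=022  0 0

# Fuller approach: honour POSIX-style permissions stored in NTFS ACLs.
# Requires a UserMapping file on the volume (see the ntfs-3g docs):
#/dev/sdb1  /mnt/backup  ntfs-3g  permissions  0 0
```

That said, for a disk that will only ever be used for Linux backups, reformatting to ext4 as suggested is the simpler and safer answer - rsync can then preserve ownership and permissions natively.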
@SJV-N:
... the idea that Microsoft could simply open-source, say, all of Windows 7 – it's often badly written.
Hmm ...
... the idea that Microsoft could simply open-source, say, all of Windows 7 – it's often badly written.
There you go.
Now it adjusts to reality.
Hence the necessary warning ^^^ .
O.
I used to work for a large software company that was very protective of its "trade secrets". Some groups were so paranoid that they did not even use the corporate central source management control system, and refused to answer specific questions about how their code worked. I had to use a de-compiler to figure out what they were doing so that my software could properly interface with it.
The code from other groups that I *did* see was very unprofessional. "Comments? We don't need no stinking comments! The code is its own documentation." I saw one place where basic list searching algorithms were written in such a way that the "time complexity" (the Big O number) was exponential where it did not have to be. No code reviews at all.
I chortled because I saw exactly this behavior at MSFT back in 2002. My team was down the hallway from the top Office people. The comments they openly made about the OS folks were hilarious (or sad, depending on your perspective). I think the Office people got along with the Apple macOS people better than with the Windows crew.
I dunno, NTFS has been fine for me for, oh, nearly 30 years. The problem really is that it's now been fine for me for NEARLY 30 YEARS.
A lot of the 'powerful features' it introduced in Windows NT 3.1 times are looking pretty petty these days. Even MS's own doco for it testifies to its age, advising that it's the best choice for 'disks over 400MB in size'... at a time when I'm ordering 10TB drives for my home desktop. Things that seemed like generous limits in 1993 (16 exabyte volume size; severe performance degradation when you go over a few thousand files in a folder; 255-character file name limits) are feeling a lot more restrictive in 2021.
Not exactly FILE NAME limits, but if the FOLDER PATH is longer than 255 characters... ohhh, a lot of interesting things happen.
Especially over networked folders.
I remember that 255-character folder path being a problem, but I don't know if it was eventually fixed in Windows 10... (bahahahah, probably not).
It's been some good 10 years since I bumped into this issue.
Users with extremely long double-barreled names, who couldn't even think of shortening their double-barreled name... they wouldn't hear of it when I mentioned they would have to type that bugger in every time they wanted authentication.
Also will add that it's been a while since I had this issue. Then again, my windows user base keeps shrinking!
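On the "was it eventually fixed in Windows 10" question above: partially. From Windows 10 version 1607 onward the 260-character MAX_PATH limit can be lifted, but only for applications that also opt in via a `longPathAware` entry in their manifest - so Explorer and most older software still choke. The system-side switch is a registry value; a sketch of the fragment (real value name, but verify against your build):

```
Windows Registry Editor Version 5.00

; Sketch: enables Win32 long paths on Windows 10 1607+.
; Applications must ALSO declare longPathAware in their manifest,
; so this alone does not fix Explorer or legacy software.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"LongPathsEnabled"=dword:00000001
```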
@Luiz Abdala
...but if the FOLDER PATH is longer than 255 characters...
You don't want to know how hard it was to explain THAT to users. This issue caused me no end of trouble - repeatedly.
The one issue that took me a long time to resolve concerned an inability to attach files from shared drives on a document server, to an e-mail (Outlook/MS Exchange 5).
I eventually twigged that it was related to a filename being too long (as copying it to a local drive and then attaching it to the message worked like a charm), but the problem was that the whole path was "only" about 170 characters long.
Experimentation led to the discovery that Outlook could not attach files if the filename was more than 112 characters (I think - twenty years ago and the old memory is not what it used to be) in length.
Icon for MS and its illogical inconsistencies.
Outlook .PST files larger than 2GB.
Woe betide thee, if you had to help someone when their PST file would no longer open.
I learned that creating another PST file and asking the pleb to move their emails over would force the crud to stay back, instead of compacting (removing the deleted) mails like you were supposed to do, nestled behind 4 levels of obscure menus in Outlook 2000.
Back when servers (Exchange, hurrrl) were not supposed to hold all your junk and had a 20MB limit, and you were meant to save the emails on the local hard drive.
The special folks that had FIVE PST folders, I instructed to run the Outlook compactor themselves, because they had to be a bit more tech-savvy to do their jobs, which in turn got them off my back.
I would like to return to 8.3 file names. So people would stop putting metadata into the filename, as in requirements_project_121212331_draft_doc_code_1231GGGGAA_20211015T1222_v_1_1_new_reviewed_by_tom_needs_approval_by_alice.xlsx. And shuffling them around by email, instead of using a properly versioned and tracked repository...
Returning to 8888888.333 would be a disaster of course. Rather it is a matter of users common sense - which term seems to have dropped from the dictionary: as an engineer I recently received a file named "not_the_final_version_of_yagi-1250mhz-optimizing-process-table_just_for_reference_and_not_checked_so_do_not_use_Brian.xls". I have no idea how this Brian searches for or keeps track of his project files - or how he organises his documents!
You once lost 500 MB of files after a filesystem check. Therefore NTFS is a shit show. Quod erat demonstrandum! Except, the only thing you've demonstrated is sloppy thinking and confirmation bias. Either that, or you're just making it up.
NTFS is a rock-solid and reliable file system, proven in the real world for over 30 years. I've been working with it for 20 odd of those and have never seen a problem similar to the one you describe. I have seen such problems with other file systems, though unlike you, I don't take that as an indication that they are shit shows. It would be presumptuous and downright silly to decide so on such isolated and limited evidence.
I get that people like to bash Microsoft, I do it regularly myself, but at least base your claims on facts and sensible thinking. You don't offer any relevant facts here and it's clear you have no idea about NTFS. The issue you described was almost certainly down to something else, the two likeliest candidates being a hardware / driver / firmware issue or user error. Of course, you know you will get likes on here by slagging off anything to do with MS, and that must be tempting. But for most of us readers the baseless, repetitive MS bashing is boring, childish and often betrays a profound ignorance of the subject in question. Sadly, that is the case here: you are simply wrong about NTFS.
I'm not claiming NTFS is better than X, Y or Z file systems. I'm not saying MS and Windows are great, or that ext4 is crap, or anything else. I am just saying it is extremely reliable and robust and is proven to be so. To claim anything else is just not true.
Right..... erm... OK? And that's your best rebuttal? Wahhh! Microsoft bad, NTFS crap... wahhh! Sarcasm. Wahhh. Just doesn't cut it fella.
I never even said you were the only one, or that you are not aware of world events (even though that's totally irrelevant). I said you don't know what you are talking about regarding NTFS, and that your understanding of the scientific method is, well, unscientific.
Your original comment was ill informed, unsubstantiated and poorly reasoned, i.e. pointless. Your reply was even worse, and your argument remains baseless. Either give us some substance or go home.
NTFS was written with Windows in mind (obviously) and may well make assumptions about how it's used by the OS. Linux might not do stuff quite the same way and could well expose weaknesses that have been "fixed" by changes to Windows rather than to the NTFS driver. I am happy to let others find these bugs for me before I use NTFS from my Linux system, not that I'd bother, given that here it's mostly ext4 and an instance of zfs on my file server.
Far more use would be a solid implementation of ext4 on Windows.
Native EXT* support in windows would be a fantastic addition to the OS. Now that we have one, why not both?
Being able to mount and make small corrections to a VHD in situ on either unix or windows systems would be pretty handy. I have had to wrangle iSCSI-attached volumes served both ways (unix-backed storage served to Windows, and Windows Storage Spaces serving Linux VHDs back the other way).
Being able to do things like compressing volumes, incremental backups, and fixing boot problems are all a real PITA when you have to copy the VHD image off the server to mount it R/W reliably.
I have managed thousands of Windows servers and desktops starting with NT 3.5, and I have found NTFS to be rock-solid since Windows 2000 came out (less so during the Windows NT days). I'm sorry your shitty 500 MB drive crapped out that one time, though.
For reference, that was also the heyday of ext2, which was utter crap. I recall plenty of times restarting Linux boxen and having them run endlessly through fsck. I have run Linux, Solaris, and FreeBSD servers on various filesystems, including ext2, 3, 4, UFS, XFS, ZFS, etc., and I recall countless issues with filesystem corruption in the nineties and noughties. Arguably, it took until ext4 for the default Linux filesystem to be reasonably stable and reliable.
Go ahead, flame away, penguinistas. I said it: your baby was an ugly duckling.
What is especially important is a reliable file system recovery DVD that boots into live Linux and then lets you fix things, do backups and restores (without Windows interfering, as with the registry) and things of that nature. I DL'd such an image a few years ago and it had an alpha-quality NTFS driver, but something that's part of the kernel and "blessed" makes this more practical.
Maybe not bullshit. I recently bought a 256 GB microSD card preformatted as exFAT. Windows and Linux read it fine, but my 2020 Macbook Air (Big Sur) won't mount it. The message is "[!] The disk you attached was not readable by this computer. [Ignore][Eject]". I find that pretty annoying.
One of their many “brilliant” ideas – alternate data streams. I quote from the first entry coming up in Google:
NTFS ADS Viruses - Computer Knowledge
Feb 28, 2013 ... The NT File System allows alternate data streams to exist attached to files but invisible to some file-handling utilities.
Actually they were useful to attach metadata to a file which were carried along with it (only as long as the file system supported it, of course), and is no different from what Apple did with its resource fork.
The problem was that most tools written to work on non-NTFS file systems kept on ignoring them - as did parts of Windows itself, including Explorer.
So they could be easily used to hide information in a file.
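The hiding trick is easy to demonstrate from a Windows command prompt on any NTFS volume. A sketch (the file and stream names here are arbitrary examples; `dir /r` needs Vista or later):

```bat
REM Create a file and attach an alternate data stream to it:
echo visible text > demo.txt
echo hidden text > demo.txt:secret

REM Plain dir reports only the main stream's size...
dir demo.txt

REM ...but dir /r lists the alternate streams too:
dir /r demo.txt

REM ...and the hidden stream can be read back:
more < demo.txt:secret
```

Copy `demo.txt` to a FAT or exFAT volume and the `:secret` stream silently vanishes, which is exactly the "only as long as the file system supports it" caveat mentioned above.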
> and parts of Windows itself as well - including Explorer.
Yes, that was an idiotic decision. Imagine being able to attach a post-it note to a file so that it showed with a little yellow badge in Explorer and displayed when you right-clicked or hovered or whatever. Immensely practical and clearly useful for any size or type of business - but no, Microsoft knows best. Thou shalt use SharePoint and pay through the nose for shit functionality built on an expensive to licence database.
It cannot (and did not) work that way. Being able to store arbitrary "metadata" next to arbitrary files cannot work. When you have an image, it might contain metadata like exposure, location, etc. But what do you do with "exposure" on your MP3 - and how are locations stored?
They didn't think this through at all and practically gave it up right away by not supporting shit. ;-)
You misunderstand. The other data streams in the file were always intended for arbitrary data i.e. you could store anything. Mostly they were used for metadata e.g. a picture's EXIF as you say. But, in exactly the same way you expect an image viewer to understand JPEG but not understand XLSX, so the metadata could only be understood by the app that created it.
The missed opportunity from Microsoft was not defining some standard metadata items applicable to all file types and making these readily accessible from Explorer - for example a free-text comment so the file's author could leave a note for others that remained attached to the file.
That was my point. With every application being left to write out its own crap, this was bound to fail. If you think about making it a standard, then it becomes clear that you might as well add the option to add the metadata to the file to begin with (version increase). This was essentially done. It has the benefit that the metadata doesn't disappear when you copy it to a different filesystem. ;-)
Not always, because a file format standard may not be easily changed by Microsoft - or by an application. Alternate streams do allow you to augment file data without any need to modify the original file.
For example, many non-destructive image editing applications use sidecar files to store changes - you need to move them along with the file to keep them - whereas if they were in an alternate stream you'd just need to copy a single file.
You mean Microsoft's equivalent to Apple's HFS Resource Forks? From 1985?
It's amazing, a company tries to support advanced functionality and because everyone else can only think in flat files the advanced functionality is a mistake.
It's like UNIX "everything's a stream so just hack at the bytes" vs NT "everything's an object and you should use methods to handle them" - idiots try to hack at NT objects because that's the lowest effort option and get surprised when they blow up in their face, so that's *obviously* Microsoft's mistake.
My guess is that ADS was originally intended to be a VMS-like file versioning tool, but was never implemented as such. Later, it was repurposed to support AppleTalk file sharing with Services for Macintosh. Even later, it was relegated to use as metadata storage to tag files as having been downloaded by Internet Explorer.
There are a lot of weird nooks and crannies in Windows that seem like they were intended to be useful, but were never completed. Symbolic links, hard links, mount points. At one point they had most of a Hierarchical Storage Management system integrated, using tape libraries and ntbackup, but I think it's completely bit-rotted away by now.
Just because NTFS is in Linux doesn't mean that NTFS in Linux is any good. There are infinite possibilities for how content and metadata are arranged on the disk. I've got NTFS disks in use with the existing drivers on old kernels, and metadata which under Windows would be placed in contiguous blocks is scattered like stars in the sky, and just as ineffable. Because this metadata, of various types, refers to and is referred to by direct addressing, it is immovable except by the file system itself - not under user control.