WinFS
no resurrection for this then?
Microsoft has unveiled a "state of the art" file system for the next 10 years that builds on NTFS. Named Resilient File System (ReFS), Microsoft's latest baby will be delivered with Windows 8 Server and become the foundation of storage on Windows Clients. ReFS will be used with Windows 8's Storage Spaces, a feature in …
My first thought too.
Strange, though, that I'm sitting here using a browser/email client that lets me tag things to my heart's content, gives me multiple views on my tagged data, and shows no sign of sluggishness even if I do a full-text content search over ten years and gigabytes of email data stored in multiple accounts. Opera does it all using an on-disk SQLite database to hold that info, and it even learns, Bayesian-style, which labels should apply to which messages.
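For the curious, here's roughly how that trick is done. This is only a minimal sketch of the idea (tags plus full-text search in one on-disk SQLite database); Opera's actual schema isn't public, so every table and column name below is invented for illustration. Build with cc mail_demo.c -lsqlite3; it assumes SQLite's FTS4 extension, which most builds include:

/* Tagged, full-text-searchable messages in a single SQLite file. */
#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    if (sqlite3_open("mail.db", &db) != SQLITE_OK) return 1;

    /* Plain tables for messages and tags, plus a full-text index. */
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS message(id INTEGER PRIMARY KEY, body TEXT);"
        "CREATE TABLE IF NOT EXISTS tag(msg_id INTEGER, label TEXT);"
        "CREATE VIRTUAL TABLE IF NOT EXISTS message_fts USING fts4(body);",
        NULL, NULL, NULL);

    sqlite3_exec(db,
        "INSERT INTO message(id, body) VALUES (1, 'quarterly filesystem report');"
        "INSERT INTO message_fts(rowid, body) VALUES (1, 'quarterly filesystem report');"
        "INSERT INTO tag(msg_id, label) VALUES (1, 'work');",
        NULL, NULL, NULL);

    /* One indexed query: messages tagged 'work' that mention 'filesystem'. */
    sqlite3_stmt *st;
    sqlite3_prepare_v2(db,
        "SELECT m.id FROM message m "
        "JOIN tag t ON t.msg_id = m.id AND t.label = 'work' "
        "WHERE m.id IN (SELECT rowid FROM message_fts WHERE message_fts MATCH 'filesystem');",
        -1, &st, NULL);
    while (sqlite3_step(st) == SQLITE_ROW)
        printf("hit: message %d\n", sqlite3_column_int(st, 0));
    sqlite3_finalize(st);
    sqlite3_close(db);
    return 0;
}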
WinFS was actually a good idea, and one that's never seen the light of day because of junk like alpha-blended clocks. It's the only decent thing I've ever looked forward to in an OS, and *nobody* has a working implementation on any OS that I've seen, despite evidence that it's more than doable.
If I wanted resiliency, I wouldn't be storing my data on an OS that can take hours to copy a few gigabytes of data because of all the filesystem hooks and security checks it does, or that can blue-screen itself just by tweaking the wrong bit of an NTFS entry. Stop developing the junk and deliver on your promises of over a decade ago.
The Cairo project came out in pieces spread over the early 90s, and it never shipped as a real product, as we know. But the WinFS filesystem that was to replace NTFS back when NTFS was at version 1? We've all been waiting on that for 20 years now. I personally remember announcements of WinFS being scheduled to ship with NT 3.5, NT4, Win2k, and 2008, yet it still hasn't actually been seen. Maybe it will really ship this time with Win8?
After 20 years, I find myself caring a lot less about it now, though I was pretty excited about it in 1993... *sigh*
Windows has always supported third-party file systems. For ext4, I think you want http://www.ext2fsd.com/.
'Course, that's a little different from Microsoft implementing it and making sure it is as thoroughly tested as their own NTFS drivers. An official implementation of ext2, for example, would be a serious threat to their "FAT patent" revenue, as discussed on these forums from time to time.
From the blog: "The NTFS features we have chosen to not support in ReFS are: named streams, object IDs, short names, compression, file level encryption (EFS), user data transactions, sparse, hard-links, extended attributes, and quotas."
Microsoft never did like hard-links, but presumably there will be complaints from the POSIX crowd about this. As for the rest of the list, "Meh!".
I fear this will contain various chunks of patented bogosity, so that people end up storing their data in something that cannot be read except by buying Microsoft products, or perhaps by paying Microsoft hefty technology licensing fees.
For the same reason I very much doubt you'll ever be able to use ext4, btrfs, etc. on a Microsoft server.
Of course, it only took two or three LKML members being murdered in their sleep by crack teams of Microsoft hashishim, identifiable only by the signature "Windows Flag"-patterned hilts on the daggers they left behind with their victims, for the word to go out: let no one ever release a Linux driver for NTFS. And there's been none, ever since.
how micros~1 manages to toot its own horn exclusively in the context of its own earlier "achievements". Everyone else's work is simply irrelevant to them. The kicker, though, is how praising NTFS ignores that their even older "achievement" is still bloody everywhere; it's got serious problems, it's not up to the latest technology, and we cannot get rid of it. Oh, and occasionally some vendor gets hit by a patent suit, forcing them to pay royalties on that piece of outdated crap. That, in a nutshell, is the essence of their contributions to the state of the art of computing. Well, isn't that nice. Here's a cookie. Now stop bothering the grownups and let them do the real work, m'dear.
Not wishing to kick a hornets' nest, but surely if it's "still bloody everywhere", has "serious problems", and "is not up to the latest technology", you should be shouting at the fuckwits who keep using it. Also, if they didn't use it, then they wouldn't have to pay royalties on "that piece of outdated crap".
No point blaming MS because people are lazy and don't shop around.
Of course, the biggest problem with NTFS (and FAT) is that you can't rename or move a file which is currently in use, and this update won't fix that problem.
"Why is it a problem?", I hear you ask.
Well, if you're doing something such as a software update and you need to replace a core library with a new copy on a running system, you can't. You need to reboot the system into a state where the library isn't being used and then replace the file. If, however, you were able to move the old library out of the way, keep it accessible to the programs/systems using it, and then replace it, you would not need to reboot the whole system, merely restart the services/programs that use it when convenient, i.e. no downtime.
You can rename open files.
Pick some random EXE. Run it. While it's running (and hence the file is open) rename it to something else. No problem.
This works for DLLs too. So you can update a running executable or library, just like you can on Linux.
Whether you can rename an open file or not depends on how the application chose to open it. It's not a Windows limitation.
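To make that concrete, here's a minimal sketch (the file name is arbitrary; compile with any Windows toolchain). Whether a rename of an open file succeeds comes down to whether the opener passed FILE_SHARE_DELETE:

/* Rename a file while a handle to it is open. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Opener allows others to delete/rename the file while it is open. */
    HANDLE h = CreateFileA("test.dat", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    /* This succeeds because the open above included FILE_SHARE_DELETE.
       Drop that flag from the CreateFileA call and this fails with
       ERROR_SHARING_VIOLATION instead. */
    if (MoveFileExA("test.dat", "test.old", MOVEFILE_REPLACE_EXISTING))
        puts("renamed while open");
    else
        printf("rename failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}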
The purpose of Restart Manager is to allow for transactional changes to open or closed files. Despite its name, it is not primarily about restarting the system. Rather, if a process holds an exe or dll open (because it is running as a service or application), RM can determine which processes to restart. Processes voluntarily register with RM, and they can let RM preserve state (open documents, changes, cursor/scroll positions, etc.). RM can restart the app/service and bring it back into the same state. This beats just replacing files, which can easily leave a process in an unknown state (started with version 1.2 and suddenly the libraries it loads dynamically are version 2.0).
RM is the reason system restarts are rare on Windows nowadays.
It is also the reason why the "restart badge" *sometimes* mysteriously disappears from the start button. That happens when RM has determined that files scheduled for replacement are being held open by processes which have *not* enlisted with RM (and thus RM must assume it cannot just restart those processes without risking loss of state). RM actually monitors the open files, and if they are suddenly closed (because you closed the app), it *will* replace the files en bloc and remove the restart badge.
Ever wondered how Windows 7 can start Chrome, Word etc, open the same pages and scroll to the position right before the system was shut down (or lost power)? That's RM working with well-behaved apps.
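If you want to poke at the API yourself, the query side looks roughly like this: a minimal sketch with error handling trimmed and an invented file path, linked against rstrtmgr.lib. It asks RM which processes currently hold a file open, which is the step before RmShutdown/RmRestart would stop and restart them:

/* Ask Restart Manager which processes hold a file open. */
#include <windows.h>
#include <restartmanager.h>
#include <stdio.h>

int main(void) {
    DWORD session;
    WCHAR key[CCH_RM_SESSION_KEY + 1] = {0};
    if (RmStartSession(&session, 0, key) != ERROR_SUCCESS) return 1;

    /* Hypothetical path; any file will do. */
    LPCWSTR files[] = { L"C:\\Program Files\\Example\\example.dll" };
    RmRegisterResources(session, 1, files, 0, NULL, 0, NULL);

    UINT needed = 0, count = 10;
    RM_PROCESS_INFO info[10];
    DWORD reasons;
    if (RmGetList(session, &needed, &count, info, &reasons) == ERROR_SUCCESS)
        for (UINT i = 0; i < count; i++)
            wprintf(L"holder: %s (pid %lu)\n",
                    info[i].strAppName, info[i].Process.dwProcessId);

    RmEndSession(session);
    return 0;
}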
...was the cry among the developers of AmigaDOS, and thus FFS was born. Some people in Berkeley are under the mistaken impression that it stands for Fast File System.
All other file systems have crap names in comparison.
As for ReFS, if it improves VSS and brings Windows closer to ZFS levels of data integrity, then that's got to be good, right?
Will yet another state-of-the-art FS resolve all of NTFS's fragmentation issues?
I remember that a Windows machine would become unbearably slow over time (there might be registry hell involved as well). Defragmentation would take hours on well-filled drives... This is not the case with ext4, or even the earlier ext2/3 filesystems: ext4 does almost everything on the fly, and the earlier versions might spend a minute fsck-ing once every two months on big, full drives.
Another question is whether it is going to be faster, since NTFS was essentially slower than many other filesystems. I once timed copying from a FAT32 flash drive to NTFS and to ext3 on a dual-boot machine, though the sluggishness of the Windows file manager should also be taken into consideration.
Yet again with the ill-informed anti-Windows rants, where you don't actually get the Linux side right either...
We have an HP D2D (StoreOnce) VTL which runs (effectively) a rebadged Red Hat with ext3 - we get access to 80 or 90% of the filesystem. When I was talking to the designers about it, they specifically said that if you fill ext3 up to over 90% it is totally crippled in terms of performance, due to fragmentation, and it isn't defragmentable if there isn't enough free space. NTFS isn't defragmentable either if you don't have enough free space.
NTFS has got better over the different versions, so has ext. How did you do your test, exactly?
>> that if you fill ext3 up to over 90% it is totally crippled in terms of performance,
Do you have any actual references to substantiate this hearsay?
I myself have never seen or heard of that.
>>How did you do your test, exactly?
I did a copy from/to the FAT32 flash drive. It was a few years ago, though.
Different AC here. AC because the docs on the subject aren't externally available.
On a certain Cisco Red Hat-based product that does a lot of logging (small files that grow big), the fragmentation gets pretty bad for the same reason it gets bad on NTFS: the filesystems are designed for files allocated in one go, not for a bunch of tiny files that grow simultaneously to a couple of MB and then spawn more log files. Defrag tools are widely available on NTFS, so the performance issues with the Windows version were relatively easy to fix, but we did have to tell customers who had performance issues on the Linux product to back their systems up and rebuild.
What I found out: frag happens... Linux *is* better at preventing it most of the time with normal apps/servers/users, but it isn't bulletproof. Then again, Microsoft has been telling us fragmentation is a thing of the past since HPFS came out... I don't think there is a filesystem yet that is impervious if you give it an evil enough application that works against its anti-fragmentation logic.
I don't have a reference, other than talking to the design engineers of a high-end tape library product. That said, most of the UNIX guys I've worked with add 10% to the required filesystem size as a rule of thumb, to allow for this sort of thing. In fact, the last company I worked for, a major UK financial, had it as a design requirement.
Regarding your test - if that's the only details you can give, it's not really verifiable as a fair test, is it?
No, the reason I have doubts about "poor performance of ext3" when it is filled over 90% is that my old five-year-old system (since upgraded) has an ext3 root partition, and my /home dir was also ext3 until a couple of years ago. It went over 90% quite often, thanks to aMule, my love of classical music, and BBC/Nature/Nova etc. films, and I never experienced any problems with it. So it might be very special circumstances you're referring to. The sluggishness of Windows (XP) is very well known, and one suspected culprit is NTFS's tendency to fragment data on disk, as opposed to ext3.
I will search for benchmarks on performance comparisons of ext3/4 vs. ntfs.
Or are others also annoyed at all this bloggy Microsoft Win8 self-promotion that the Reg is passing on? At least, if you are going to do this, qualify it with some reality checks about past Microsoft promises in such areas as spiffy file systems, fast boots, getting rid of the registry, etc...
New Windows versions, whatever their qualities, almost never end up living up to their hype. This isn't about ext4 or an equivalent feature in system xyz being better, it's about keeping Windows expectations realistic.
Win 8 magically has storage spaces, brilliant update mechanisms, and now a better-than-anything-else FS. Is there a beta preview of this? Show us the money!
I think we need a new term for all this: blogware.
They're a news site. They make us aware of news, which includes what the companies are claiming. No doubt when it's actually released, they will make us aware of both their own opinions on the matter, and on experts' opinions.
News means reporting what's happening, not biasing it with your own opinion... El Reg typically separates these quite nicely, IMO. Hence we get Apple rumours passed on as well.
But is this new filesystem safer than NTFS? NTFS is not safe and might corrupt your data, as research shows. ReiserFS, ext, JFS, XFS, etc. - none of those filesystems gives you data integrity.
ECC RAM is needed because bits flip at random in RAM, and ECC detects and corrects them. The same random bit flips also occur on disk, and NTFS is not designed to detect and correct them.
ZFS is designed to detect and correct bit flips, and succeeds well, as research shows:
http://en.wikipedia.org/wiki/ZFS#Data_Integrity
ZFS was the first filesystem designed to solve this issue; no other filesystem targets it yet. In a few years, other filesystems will follow. But Sun/Oracle is first and the rest follow, as usual.
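The core mechanism is simple enough to sketch. This is just an illustration of the idea (the checksum is stored alongside the *pointer* to a block, not inside the block, so a read can be verified against what the parent expected); it is not ZFS's on-disk format, and the Fletcher-4 here is simplified from the ZFS variant:

/* End-to-end block checksumming, ZFS-style, in miniature. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

typedef struct { uint64_t sum[4]; } cksum_t;

/* Simplified Fletcher-4 over 32-bit words. */
static cksum_t fletcher4(const uint32_t *data, size_t nwords) {
    cksum_t c = {{0, 0, 0, 0}};
    for (size_t i = 0; i < nwords; i++) {
        c.sum[0] += data[i];
        c.sum[1] += c.sum[0];
        c.sum[2] += c.sum[1];
        c.sum[3] += c.sum[2];
    }
    return c;
}

int main(void) {
    uint32_t block[1024] = { 42, 7, 99 };    /* the "on-disk" data block  */
    cksum_t stored = fletcher4(block, 1024); /* kept in the block pointer */

    block[1] ^= 0x10;                        /* simulate silent bit rot   */

    cksum_t now = fletcher4(block, 1024);
    if (memcmp(&stored, &now, sizeof now) != 0)
        puts("checksum mismatch: fetch the mirror copy / reconstruct from parity");
    return 0;
}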
Yeah, this might be true; however, my own observation is that NTFS had much poorer handling of hardware problems.
For example, an acquaintance of mine asked me a couple of years ago to help with her Lenovo laptop running XP. I now realise it most probably had a faulty mainboard (leading to a sequence of multiple device failures). When it stopped booting, a live Ubuntu CD helped to discover this, and she had the memory replaced. Then it failed to boot XP again. The user was sure she had previously backed up all her data. I booted off a live Ubuntu USB thumbdrive; the NTFS partition mounted OK, but the NTFS tools (ntfs-3g) could not repair it. The user's data were safely copied to my external drive (ext3-formatted, as a matter of fact), no backup needed.
OK, then came the time to try to remedy the partition with the genuine Lenovo or M$ restore tools and reinstall that state-of-the-art OS called Windows XP. However, neither the corresponding backup utility nor the disk-repair tools could do anything there; they couldn't even see the drive carrying their own NTFS.
No, I'd rather not. The user got Ubuntu and wiped XP off the HDD. When another accident happened later, the rescue utility e2fsck -yf did the job.
So, PoA or PoS: the latter is what you can expect from Redmond.
"...The hardware ECC in every drive, however, is [protecting against data corruption]..."
Well, that is not true. If you look at the spec sheet of any enterprise disk, for instance Fibre Channel or SAS disks, you will typically see:
"1 irrecoverable error in every 10^16 bits read"
This proves that ECC on disks does not protect against bit rot. The same goes for hardware RAID: it does not protect against data corruption, as there are no checksums for that.
It proves no such thing. ECC schemes run a checksum on a set of bits. They guarantee that you can't rot one bit without catching it. More sophisticated schemes may catch more than 1 bit rot on that set - but they all have limits in that, if multiple bits rot in the set and that results in a still-valid checksum at the end, you'll have missed the errors. Cleverer people than me, or you, will be able to quantify the probability of not catching the error. Note also that, depending on your acceptable overhead in lost storage, you could have correction of errors, not just detection. Or at least you can on transmission channels, not sure about HDDs.
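To put a concrete face on that limit, here's a tiny worked example (plain C, nothing disk-specific): a single parity bit over a byte catches any odd number of flipped bits but is blind to an even number, which is exactly the kind of gap that fancier codes shrink but never close entirely:

/* One parity bit: detects odd numbers of bit flips, misses even ones. */
#include <stdio.h>
#include <stdint.h>

static int parity(uint8_t b) {          /* 1 if an odd number of bits set */
    int p = 0;
    while (b) { p ^= b & 1; b >>= 1; }
    return p;
}

int main(void) {
    uint8_t byte = 0xA5;
    int stored = parity(byte);          /* the "checksum" we wrote down   */

    uint8_t one_flip  = byte ^ 0x01;    /* one bit rots: caught           */
    uint8_t two_flips = byte ^ 0x03;    /* two bits rot: checksum matches */

    printf("one flip detected:  %s\n", parity(one_flip)  != stored ? "yes" : "no");
    printf("two flips detected: %s\n", parity(two_flips) != stored ? "yes" : "no");
    return 0;
}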
There's a reason why I pronounce it like it's an STD.
At least on the NetApp we are using for our DFS-referred shares, I can do almost anything to a file unless some program has a file-level lock on it. (Disk-level locks are just plain stupid in this day and age, unless it's a drive diagnostic and repair program.)
Um, shall I be the one to tell them that nobody uses NTFS anymore, or has one of you already done the honours?
Local machines don't store much anymore, and nearly all servers are Linux running ext3, ext4, or ZFS.
NTFS is far too slow and space-wasting - well, it was only an emergency solution so that NT 3 and 4 could crash left, right, and centre without corrupting disk writes anyway.