There are different ACL models
Hi
POSIX has its own way to control access, using ugo (user/group/other) permission bits.
NTFS uses a much more advanced ACL model.
Porting NTFS to Linux might affect the way ACLs are used.
I am an old Linux/Unix person, but I admit that the Windows ACL model is much more flexible.
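To make the gap concrete, here is a minimal C sketch (assuming libacl is installed; the path is just a placeholder) that reads both the classic ugo bits and the extended POSIX ACL of a file. An NTFS security descriptor carries far more than this: inheritance flags, deny entries, fine-grained rights.

```c
/* Minimal sketch: read the classic ugo bits and the POSIX ACL of a file.
 * Assumes libacl is installed; build with: cc acl_demo.c -lacl
 * The path below is just a placeholder. */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/acl.h>          /* POSIX draft ACL API, provided by libacl */

int main(void)
{
    const char *path = "/tmp/example.txt";   /* hypothetical file */

    /* Classic ugo: one owner, one group, everyone else. */
    struct stat st;
    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("ugo mode bits: %03o\n", (unsigned)(st.st_mode & 0777));

    /* POSIX ACLs add per-user/per-group entries on top of ugo, but
     * still have no inheritance flags, no deny entries, and far fewer
     * rights than an NTFS security descriptor. */
    acl_t acl = acl_get_file(path, ACL_TYPE_ACCESS);
    if (acl == NULL) {
        perror("acl_get_file");
        return 1;
    }
    char *text = acl_to_text(acl, NULL);
    if (text != NULL) {
        printf("POSIX ACL:\n%s", text);
        acl_free(text);
    }
    acl_free(acl);
    return 0;
}
```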
The next thing is how I/O is done inside the kernel.
XFS has failed to support bigger block sizes for the filesystem, I guess because the kernel operates in 4K segments.
In other words, an I/O larger than 4K will be chopped up into 4K memory segments and placed in a queue for the elevator, which tries to glue them together again.
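Here is a minimal C sketch (the file name is just a placeholder) that issues a single 256K write() from user space. Whether that reaches the disk as one request or as re-merged 4K segments is exactly what the elevator decides; blktrace or iostat will show the actual request sizes on the device.

```c
/* Minimal sketch: issue one 256K write() from user space.
 * The file name is just a placeholder. Build with: cc big_write.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

#define IO_SIZE (256 * 1024)   /* one 256K application write */

int main(void)
{
    /* Align the buffer to the page size so the same sketch also works
     * with O_DIRECT, which bypasses the page cache. */
    long page = sysconf(_SC_PAGESIZE);   /* typically 4096 on Linux */
    char *buf;
    if (posix_memalign((void **)&buf, (size_t)page, IO_SIZE) != 0) {
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 'x', IO_SIZE);

    int fd = open("/tmp/big_write.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* One 256K syscall from the application's point of view; the page
     * cache and block layer decide how it is segmented and re-merged
     * on the way to the device. */
    ssize_t n = write(fd, buf, IO_SIZE);
    if (n < 0)
        perror("write");
    else
        printf("wrote %zd bytes in a single write() call\n", n);

    close(fd);
    free(buf);
    return 0;
}
```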
I once compared this with JFS on AIX, where an application issued 256K write requests, but JFS on AIX wrote at most 80K per request to the disk, even though the file was not fragmented.
If I instead mounted it using SANergy (which is similar to newer versions of NFS, where one can split metadata and data traffic), going to the same type of filesystem (JFS on AIX), I got 256K on each write request.
The only explanation I have is that all POSIX systems most likely perform the same cut-and-paste on I/O, which in my opinion is a waste of CPU resources.
Why explain this here?
Because when I compared filesystem performance between NTFS and the filesystems available at the time, NTFS was far faster, which I guess is because it might not do this cut-and-paste in the kernel.
If Linux NTFS supports bigger blocks for the filesystem, then I guess the same splitting will happen here too?
The XFS people say (if I remember it correctly) that if one recompiles the Linux kernel to use pages bigger than 4K, then bigger blocks can be supported as well.
But 4K is used for memory allocations etc., and changing it may affect many other things too.
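For reference, a quick C sketch (the mount point "/" is just an example) that prints the two numbers this coupling is about:

```c
/* Minimal sketch: print the kernel page size and a filesystem's block
 * size; the mount point "/" is just an example. */
#include <stdio.h>
#include <unistd.h>
#include <sys/statvfs.h>

int main(void)
{
    /* The page size is fixed by the running kernel (on some
     * architectures by the hardware); on most Linux systems it is 4096. */
    printf("page size: %ld bytes\n", sysconf(_SC_PAGESIZE));

    /* The filesystem block size for a given mount point, which on
     * Linux traditionally could not exceed the page size. */
    struct statvfs vfs;
    if (statvfs("/", &vfs) == 0)
        printf("block size on /: %lu bytes\n", (unsigned long)vfs.f_bsize);
    return 0;
}
```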
Porting NTFS so that it keeps the same characteristics isn't as easy as it sounds.
Different I/O mechanisms, different ACLs.
Thanks for this article