* Posts by Anubis

1 post • joined 15 Jul 2011

Defragger salesman frags HP

Anubis
Pirate

What about split I/Os?

"In order to see no performance degradation from fragmentation you need a controller that supports I/O queuing, out-of-order reads/writes and "knows" where each data block is on any given disk without having to look it up from same. Then it can queue the read/write requests for a disk and satisfy them all with one pass of the actuator across the platters, much like Novell's old hashing and disk I/O elevator algorithms."
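The "elevator" ordering mentioned in the quoted comment can be sketched roughly like this: pending requests are sorted by block address so a single actuator pass across the platters services them all. (Purely illustrative; block numbers are made up and real controllers are far more elaborate.)

```python
# Sketch of an elevator (SCAN-style) disk scheduler: service pending
# requests in block-address order so one sweep of the head covers them.

def elevator_order(pending, head_pos):
    """Order block requests so the head sweeps upward from head_pos,
    then picks up the remaining lower-addressed blocks."""
    above = sorted(b for b in pending if b >= head_pos)
    below = sorted(b for b in pending if b < head_pos)
    return above + below  # one upward sweep, then the leftovers

requests = [98, 183, 37, 122, 14, 124, 65, 67]
print(elevator_order(requests, head_pos=53))
# → [65, 67, 98, 122, 124, 183, 14, 37]
```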

@TeeCee... if such a controller exists today, could you give us details?

Given that SANs are only ever block-level storage, they do NOT know which I/Os relate to which files. The mass of separate read/write I/Os for a fragmented file (almost certainly interspersed with other simultaneous reads and writes) will be spread non-optimally across the disks in the SAN storage pool.

If the controller can do all of the above, then yes, we DO NOT need to defragment the NTFS volume that sits over the SAN's proprietary layout. Otherwise NTFS will fragment and cause the Windows OS to "split" I/O requests for files sent to the SAN, creating a performance penalty.
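The split-I/O effect can be sketched with a toy model (not real NTFS internals): a file stored in N fragments turns one logical read into N separate physical requests before anything is sent down to the SAN.

```python
# Illustrative model only: each fragment is (lba, cluster_count).
# Reading a range of file clusters needs one physical I/O per fragment
# the range touches -- the "split" I/Os the OS issues for a fragmented file.

def split_ios(fragments, offset, length):
    """Return the physical I/Os (lba, clusters) needed to read `length`
    clusters starting at file cluster `offset`."""
    ios, pos = [], 0
    for lba, count in fragments:
        start, end = pos, pos + count
        lo, hi = max(start, offset), min(end, offset + length)
        if lo < hi:
            ios.append((lba + (lo - start), hi - lo))
        pos = end
    return ios

# Contiguous file: one I/O.  Same-sized read on a 4-fragment file: four I/Os.
contiguous = [(1000, 64)]
fragmented = [(1000, 16), (5000, 16), (200, 16), (9000, 16)]
print(len(split_ios(contiguous, 0, 64)))   # → 1
print(len(split_ios(fragmented, 0, 64)))   # → 4
```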

One way of addressing the problem is adding more spindles and spreading the I/Os, but it's only a matter of time: as fragments accumulate within NTFS, the problem will come back for sure. You need to defragment NTFS to keep it in check, unless a controller already exists that is smart enough to make defragmentation obsolete.
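The "more spindles" mitigation amounts to striping: consecutive stripes land on different disks, so separate requests can be serviced in parallel. A minimal sketch (a real SAN's layout is proprietary, and the stripe size here is an assumption):

```python
# Toy striping map: which spindle holds a given block address?
def spindle_for_block(lba, stripe_size, n_spindles):
    return (lba // stripe_size) % n_spindles

# Four requests one stripe apart each hit a different spindle,
# so all four can be serviced concurrently.
ios = [0, 128, 256, 384]
print([spindle_for_block(b, 128, 4) for b in ios])  # → [0, 1, 2, 3]
```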

