Defragger salesman frags HP

Defrag software supplier Diskeeper has denounced HP for failing to mention fragmentation issues. Mandeep Birdi, a technical presales consultant at Diskeeper Corp Europe, said: "It's unfortunately becoming a little too common for me to hear from many customers who are adding SANs to their network estate, that their SAN vendor …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Devil

    Nothing new here move along

    We have been hearing this for the last 20 years from _ALL_ storage vendors: "There is no need to optimize this array, this is not the fragmentation you have been looking for".

    So nothing new here, HP is just toeing the party line.

    Now, whether you need 3rd party defrag/optimisation software or not is another matter. IMO there was never any need for it on Linux, Solaris and BSD, and the need on Windows went the way of the dinosaurs when all Windows varieties moved to NTFS.

    1. Pascal Monett Silver badge

      Sorry to disagree, but

      Windows XP on NTFS still needed defragmenting. Badly.

      I'm experimenting with Win7 now, and I think it may be less important, but I still defrag my disks every month on the precautionary principle.

    2. TeeCee Gold badge
      FAIL

      Dinosaurs?

      "the need on Windows went the way of the dinosaurs when all Windows varieties moved to NTFS."

      Er, bollocks!

      You'll be delighted to hear then that a crufty Windows box gets a honking speed hike from a good defrag, both online and (key here) offline for the metadata and registry hives.

      Not too long ago, when filesystems on physical disks were the norm, we used to have to defrag our HP-UX boxes to maintain I/O performance. It's only the advent of the SAN arrays that's put paid to this.

      Anything storing data to a filesystem on a single disk will benefit from defragmentation, as ensuring that the actuator gets its blocks sequentially when reading a given file, without having to move all over the platter, speeds things up no end.

      Once you use a disk array that supports parallel access, the situation changes. For example, the good old IBM System/38 used to quite deliberately fragment data across all the disks it had available in a given storage pool and returned very respectable I/O performance at the time as a result, so this is nothing new.

      NB: Your cheapshit RAID 0 array doesn't cut it here as that also benefits from a defrag. Reading block 1 from disk 1, block 1 from disk 2, block 2 from disk 1 (etc ad nauseam) is still faster than having the heads hop all over the shop on all the disks. However, a fragmented RAID 0 setup will often outperform a defragmented single disk purely due to the benefits of parallel I/O.
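
      A minimal sketch of that striping point, assuming a toy round-robin RAID 0 layout (block n lives on disk n % 2) and counting a "seek" as any jump to a non-adjacent offset on a disk; an illustration only, not any real controller's layout:

      ```python
      # Illustrative only: toy RAID 0 mapping, logical block lba lives on
      # disk (lba % n_disks) at per-disk offset (lba // n_disks).
      def raid0_location(lba, n_disks=2):
          return lba % n_disks, lba // n_disks

      def seeks_per_disk(lbas, n_disks=2):
          """Count head jumps per disk when the blocks are read in the given order."""
          last = {}                               # disk -> last offset touched
          seeks = {d: 0 for d in range(n_disks)}
          for lba in lbas:
              disk, off = raid0_location(lba, n_disks)
              if disk in last and off != last[disk] + 1:
                  seeks[disk] += 1
              last[disk] = off
          return seeks

      contiguous = list(range(8))                 # defragmented file: blocks 0..7
      fragmented = [0, 1, 40, 41, 9, 8, 70, 71]   # same data scattered about
      print(seeks_per_disk(contiguous))           # {0: 0, 1: 0} - heads just stream
      print(seeks_per_disk(fragmented))           # heads hop all over the shop
      ```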

      In order to see no performance degradation from fragmentation you need a controller that supports I/O queuing, out-of-order reads/writes and "knows" where each data block is on any given disk without having to look it up from same. Then it can queue the read/write requests for a disk and satisfy them all with one pass of the actuator across the platters, much like Novell's old hashing and disk I/O elevator algorithms.
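
      For the curious, here's that idea in miniature: a toy elevator (SCAN) pass in Python. This is not Novell's actual algorithm, just the "satisfy everything queued in one sweep of the actuator" notion described above:

      ```python
      # Illustrative only: order a queue of block requests so one outward sweep
      # and one return sweep of the head services the lot (a SCAN / elevator pass).
      def elevator_order(queued_blocks, head_position):
          ahead  = sorted(b for b in queued_blocks if b >= head_position)
          behind = sorted((b for b in queued_blocks if b < head_position), reverse=True)
          return ahead + behind

      # Requests arrive out of order (fragmented files, competing workloads...):
      queue = [731, 12, 455, 13, 900, 14, 456]
      print(elevator_order(queue, head_position=100))
      # -> [455, 456, 731, 900, 14, 13, 12]: out once, back once,
      #    rather than seeking back and forth in arrival order.
      ```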

      I'll just buy a drink for that nice dinosaur at the end of the bar now.

      1. John Smith 19 Gold badge
        Thumb Up

        @TeeCee

        "In order to see no performance degradation from fragmentation you need a controller that supports I/O queuing, out-of-order reads/writes and "knows" where each data block is on any given disk without having to look it up from same. Then it can queue the read/write requests for a disk and satisfy them all with one pass of the actuator across the platters, much like Novell's old hashing and disk I/O elevator algorithms."

        I think you've covered *all* the features a controller would need to support to make defragging completely redundant.

        Know you of such a controller and where it might be had?

    3. Ru
      Boffin

      Re: Nothing new here move along

      Though I can't speak for Solaris or BSD, it seems like most current and past Linux filesystems can indeed become fragmented. There was a defrag tool written for ext2, though no such thing exists for ext3; ext4 is intended to include one.

      The ext* filesystems and NTFS are significantly more fragmentation resistant than, say, FAT, but hardly immune.

      1. Robert Carnegie Silver badge

        There's a difference between "Fragmentation happens" and "Fragmentation is a problem".

        If your data files are scattered in bits across a volume, do you care? Mostly just if performance suffers. You might also be concerned about data recovery, but that's what a backup is for.

        If you're very happy with your file backup, you can also delete the actual data and then restore the files, which will now be in one piece each, not more. But if you clone a fragmented partition, it stays fragmented.

        Now, "MyDefrag", free for personal or non-commercial use (I forget which) on Windows, is written by a guy who also expects to see benefits from putting most-accessed files in low-numbered sectors on the disc. I don't know if he's right, but you could try it with the swap file / page file. His tool also moves other files around on the disk in an attempt to optimize.

        1. John Smith 19 Gold badge
          Coat

          @Robert Carnegie

          ""MyDefrag", free for personal or non-commercial use (I forget which) on Windows, is written by a guy who also expects to see benefits from putting most-accessed files in low-numbered sectors on the disc. "

          Unlikely these days.

          Since the SCSI days, hard drives have mapped out dud sectors and replaced them with undamaged parts of the disk.

          AFAIK the good sectors will be from a block of sectors elsewhere on the disk.

          So you *might* see an improvement if the sectors are where you *think* they are (given the simplest map is likely to be the real layout of the disk), but it's by no means as certain as it might have been decades ago.

          The anorak was acquired when my drives crashed and I looked at copying them, learning *far* more about it than I've ever wanted to know.

  2. James 100
    FAIL

    So ... which is it?

    HP haven't quite got their story straight there, then ... first, the product is "equally" good on fragmented filesystems (so Diskeeper would be a waste of resources), then they say a defragmented filesystem will "further increase the I/O performance", i.e. be faster rather than equal.

  3. Paul Crawford Silver badge

    Before you defrag...

    ...please do a full disk test/scan to make sure you don't have any bad sectors.

  4. Anonymous Coward
    Meh

    Move along...

    ...a glorified sales rep says you need to buy their product.

    Meh...

  5. Anonymous Coward
    Flame

    Hmmm

    I accept that defragmenting will make a difference to single user Windows boxes, although the difference will depend on disk space utilisation. If you are using <50% I would be surprised if you could tell the difference; if it is >90% it will make a big difference.

    For servers - assuming your EVA is servicing multiple servers and each server services multiple users, I would be VERY surprised if defragmenting provided a noticeable speed improvement for anything other than basic tests that probably don't represent real-world usage situations for an EVA.

    If, on the other hand, Diskeeper are saying that EVA performance is generally pants (particularly at the low end) and that there are better solutions out there then I would agree. However I suspect Diskeeper would want to charge for a copy of their software for this not so astounding information - ripping off a customer who has made a poor buying decision (EVA) with a snake-oil solution (Diskeeper) doesn't rate highly in my books...

    1. Anonymous Coward
      FAIL

      Makes no odds in the real world

      Defragmenting the file system on an EVA will make very little difference if any, since the data is purposely already evenly distributed across multiple disks within the drive pool, which are all accessed in parallel.

      Defrag is really only there to correct file system design tradeoffs, not the underlying array layout. It's more pertinent to speeding up sequential reads on a single disk or small array groups.

      One of the reasons NetApp have a reallocation scan is because they're overlaying LUNs on a file system (WAFL), which fragments by design.

  6. Anonymous Coward
    FAIL

    Title required, would be nice if it auto filled based on article title but meh....

    Any Windows based file server that is serving large numbers of files needs regular defragmentation to speed both user access and backup performance.

    Even an EVA or 3Par array with its wide stripe technology will benefit from a defrag, as pulling a single file off the least number of stripes will always benefit performance. Fewer seeks are ALWAYS faster. In an EVA, if a file is fragmented across many stripes then the number of seeks increases at an alarming rate, and a heavily fragmented file system with millions of files will slow down the entire disk group.

    HP need to pull their collective head out of their A$$ and start telling their customers the truth.

    1. Anonymous Coward
      FAIL

      Title required, would be nice if it auto filled based on article title but meh...

      But the point is, HP don't have a defrag issue; the overlaying file system does. Be that NTFS or whatever, aiming this article solely at HP is disingenuous at best; you could equally apply this to EMC or HDS. Not to mention WAFL and ZFS interaction with overlying file system defrags.

  7. JimboJones
    Thumb Down

    Defrag your SAN?

    If you use thin provisioning then you don't defrag your SAN volumes! The defrag writes into the free space and your volume becomes fully allocated on the SAN, negating the thin provisioning benefits :)

  8. Pink Duck
    FAIL

    Caution

    I advise a full system image clone before touching Diskeeper, due to fond memories of it corrupting NTFS’ Master File Table.

    1. The Infamous Grouse
      Alien

      Diskeeper Corporation

      Can't say I ever had a problem with the Diskeeper software when I used it back in my Win2000 / early XP days.

      However, since I stopped using it, I have had a problem trying to get an e-mail address removed from their mailing list. I think four months has been the longest successful gap before they've started up again.

      I hadn't realised they had connections to Scientology. Sort of explains the unwillingness to truly let me go, I suppose.

    2. Tom 13

      I've never had a problem with it corrupting the tables,

      but the damn thing kept running all the time even when I was trying to use the PC and bolluxing up my ability to access files. After I yanked it things ran better.

      Yes, it is an underpowered Lenovo and at the time it had Vista on it. But given those are the circumstances under which Diskeeper claim to improve performance....

  9. Anonymous Coward
    Alien

    The number one reason NOT to use Diskkeeper...

    ...is that it is owned and run by a bunch of Scientologists, who like to force employees to take training in their cult.

  10. Anonymous Coward
    WTF?

    Every Office Worker Outside The IT Department...

    ... just KNOWS that defragging their disk is sure to increase the performance of their PC by at least 300%. Even those who access all their data from network servers.

  11. Geoff Heaton

    Windows 7

    @Pascal Monett - By default, Win 7 runs a defrag job (%windir%\system32\defrag.exe -c) every Wednesday at 01:00.
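
    If you want to confirm the schedule on your own machine, here's a quick sketch in Python; the task path below is an assumption from memory, so adjust it if "schtasks /Query" shows yours living elsewhere:

    ```python
    # Query Task Scheduler for Windows' built-in defrag job.
    # NOTE: the task path is assumed, not verified; check with "schtasks /Query".
    import subprocess

    TASK = r"\Microsoft\Windows\Defrag\ScheduledDefrag"   # assumed default task name

    result = subprocess.run(
        ["schtasks", "/Query", "/TN", TASK, "/V", "/FO", "LIST"],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)   # trigger (e.g. weekly, Wed 01:00) and last run
    ```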

  12. markusgarvey
    Trollface

    Diskeeper/Scientology

    TIL: Scientology owns Diskeeper...it's also a waste of money....windows defragger IS Diskeeper, just a stripped down version...

    http://en.wikipedia.org/wiki/Diskeeper_Corporation

  13. Anubis
    Pirate

    What about Split I/O's?????

    "In order to see no performance degradation from fragmentation you need a controller that supports I/O queuing, out-of-order reads/writes and "knows" where each data block is on any given disk without having to look it up from same. Then it can queue the read/write requests for a disk and satisfy them all with one pass of the actuator across the platters, much like Novell's old hashing and disk I/O elevator algorithms."

    @TeeCee...If such a controller exists today, could you give us details................................................??????

    Given that SANs are ONLY ever block-level storage, they do NOT know which I/Os relate to which files. A whole mass of separate I/O writes/reads for fragmented files (which will most certainly be interspersed with other simultaneous data writes/reads) will be non-optimally spread across the disks in the SAN storage pool.

    If the controller can do all of the above then yes, we DO NOT NEED to defragment the NTFS which runs over the SAN's proprietary file system. As for NTFS, it will fragment and cause the Windows OS to "split" I/O requests for files sent into the SAN, creating a performance penalty.

    One way of addressing the problem is by adding more spindles and spreading the I/Os, but it's only a matter of time: as fragments increase within NTFS, the problem will come back for sure. You need to defragment the NTFS to keep it in check, unless we already have a controller smart enough to make defragmentation obsolete.....
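
    A back-of-the-envelope sketch of the split I/O point (Python, with made-up block numbers): each contiguous extent of a file costs the OS at least one separate read request, whatever the array does with them afterwards:

    ```python
    # Illustrative only: a file's extents as (start_block, length) pairs.
    # One contiguous extent -> one read request; six extents -> six requests,
    # all interleaved with everyone else's traffic before the SAN sees them.
    def read_requests(extents):
        return [{"offset": start, "length": length} for start, length in extents]

    contiguous_file = [(1000, 64)]                        # one extent, 64 blocks
    fragmented_file = [(1000, 8), (5120, 8), (220, 16),   # the same 64 blocks in
                       (9000, 8), (300, 16), (7777, 8)]   # six scattered extents

    print(len(read_requests(contiguous_file)))   # 1
    print(len(read_requests(fragmented_file)))   # 6
    ```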

  14. mmaterie
    Facepalm

    Response and apology from Diskeeper Corp

    I’m the VP of Product Management at Diskeeper Corporation and would like to hopefully clear up some confusion.

    First off I must apologize to HP and the knowledgeable and ethical employees of HP. Any inferred assertion that HP as a company, a great partner of Diskeeper Corporation, is misleading customers is very unprofessional and simply inaccurate. Corrective actions are already underway within Diskeeper Corp so that this does not occur in the future.

    It is not the view of Diskeeper Corporation that HP, or any other SAN vendor, has any responsibility to educate their customer about an issue that is not of their causing.

    Therein lies what may well be a misunderstanding, which the representative from HP is helping correct.

    SAN solutions can employ proprietary block or file management systems, and fragmentation can occur at this level, the same as it can in the operating system’s (e.g. Windows) file system. HP is making it clear they do not have this particular issue in their SAN.

    Our expertise and solutions are designed exclusively to optimize performance of a general purpose operating system. The HP rep stated this on the subject:

    “A defragmentation application, such as the one your message referenced [Diskeeper], provides a contiguous file reappearance to previously fragmented file systems. This works well with the EVA platform – just as with other platforms or LUNs in general.”

    and...

    "For sequential data accesses, defragmentation can help to increase performance – especially when combined with sophisticated techniques that the EVA provides."

    Due to our poor communication, there’s been a great deal of unwarranted controversy generated between vendors, readers, and the media when the vendors actually AGREE that defragmenting the OS file system(s) is a good thing!

    For your time and attention to this, my apologies to HP, Chris Mellor, and readers of The Register.

    Michael Materie

    Diskeeper Corporation

    1. UKHobo

      I'm so happy now

      "Due to our poor communication, there’s been a great deal of unwarranted controversy generated between vendors, readers, and the media when the vendors actually AGREE that defragmenting the OS file system(s) is a good thing!"

      Thanks for clearing that up for me Mr Diskeeper fraggle expert.

This topic is closed for new posts.
