Interesting
I've always believed that vibration may be a source of pleasure for some, but causes premature failure in hard discs.
Everyday background vibration in data centre drive arrays can slow drive random read performance by up to 246 per cent. Stiffer drive racks prevent this happening and make I/O-dependent apps go faster. Storage consultant Robin Harris pointed us towards the IEEE paper Effects of Data Center Vibration on Compute System …
100% (stop), then 100% (reverse, i.e. write), then 50% slow read!
Isn't it knobvious!
Why do people insist on using % like this? It's like saying that if you take the Pill perfectly you have a 0.1% chance of becoming pregnant, while in normal, imperfect use it is 5%. A 4900% increase? NO! A 4.9 percentage-point increase (worked numbers below).
Argh.
(I reserve the right to be totally wrong!)
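A quick worked version of those Pill numbers as quoted above (Python, purely illustrative figures):

```python
# Worked numbers for the Pill example (figures as quoted, illustrative only)
perfect, typical = 0.1, 5.0          # failure rates, per cent

relative = (typical - perfect) / perfect * 100   # 4900.0
absolute = typical - perfect                     # 4.9

print(f"relative increase: {relative:.0f}%")        # the scary framing
print(f"absolute increase: {absolute:.1f} points")  # percentage points
```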
It comes from the study author using the slower speed as the denominator, which shows the difference as a higher percentage. This is the correct method for describing the increase in speed from a slower solution to a faster one.
However, it is incorrect to use this to describe a decrease in speed from a faster solution to a slower one. The slow-down should be expressed as a percentage of the faster solution: a 246% speed-up means the fast set-up runs at 3.46 times the speed of the slow one, so the slow-down is 2.46/3.46, or about 71%.
(The denominator in a percentage should always be the number you're comparing against: for a slow-down you're comparing against the faster speed; for a speed-up, against the slower speed. This principle is why a stock that goes up 50% and then down 50% ends up below where it started.)
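A minimal sketch of that denominator rule, plugging in the paper's 246 per cent figure (Python, illustrative numbers only):

```python
# A 246% speed-up means the stiff rack runs at 3.46x the wobbly one.
fast, slow = 3.46, 1.0                   # relative random-read speeds

speed_up  = (fast - slow) / slow * 100   # baseline = the slower speed
slow_down = (fast - slow) / fast * 100   # baseline = the faster speed

print(f"speed-up:  {speed_up:.0f}%")     # 246%
print(f"slow-down: {slow_down:.0f}%")    # ~71%
```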
I wonder if the mysterious latent performance effect is something to do with drive- or controller-level caches? I'm guessing, but could retried reads cause extra cache to be consumed, thereby reducing the cache's effectiveness on the subsequent run? Additionally, I suspect that many of the "successful" reads were failures straight off the head that software error correction saved - could that consume more cache (or some other resource) and affect a subsequent run? Regardless, the paper is light on cache effects at any level, which I find a little surprising.
...caused by the previous run's writes being less well-written, so when reading back, more prone to random ECC failures?
Unless they were properly isolating the tests by never ever reading a block which was not first written in the same test session.
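For what it's worth, a minimal sketch of that isolation rule (the path and sizes are hypothetical, with a test file standing in for a raw device); the point is simply that the read phase only ever touches blocks the same session wrote:

```python
# Each run may only read blocks it wrote itself, so leftovers from a
# previous run can't contaminate the read-back numbers.
import os, random

BLOCK, NBLOCKS = 4096, 1024
PATH = "/tmp/vibration-test.bin"         # hypothetical stand-in device

written = []
with open(PATH, "w+b") as f:
    for i in range(NBLOCKS):             # write phase: remember what we wrote
        f.seek(i * BLOCK)
        f.write(os.urandom(BLOCK))
        written.append(i)
    for i in random.sample(written, 256):  # read phase: our own blocks only
        f.seek(i * BLOCK)
        f.read(BLOCK)
```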
Reminds me of the "Disk Fault 18" problem on the Beeb's floppies, caused by the drive briefly going overspeed during start-up. If you were really unlucky, the block you wanted to write was right there at the instant the drive came ready; you wrote it, and because the disk was overspeed, the write spilled over into the header of the next sector. It took a lot of pain to find that.
So mounting drives to heavy, immovable structures increases their read-write speeds. What about all those drive-mounting kits that reduce noise and vibration through rubber mounts - are those useless?
The three drives in one of my PCs are mounted with Zalman heatpipe coolers. They work well at reducing the drives' heat, but do the rubber mounts make the read-write speeds worse? Is it bad for the drives?
Some quiet PC enclosures have vibration isolation - rubber grommets or springs - on the disk mounts, to prevent the noise of the disk drive coupling into the case and causing acoustic annoyance. Such isolation must also work in the other direction, and attenuate any forced vibration of the disk drive by the chassis. Some benchmarking please. Are quiet PCs also faster PCs?
Anyway, if drives being vibrated by their chassis is a problem, surely it's a much better solution to float each drive on vibration-deadening rubber mounts than to spend a fortune on heavyweight ironmongery.
The thing that I know does cause systems to slow down is being too warm (or cold). A drive outside its manufacturer's recommended operating temperature range may have a shortened life expectancy, but definitely suffers degraded seek times while it is operating outside spec.
When I shout at things, they tend to start performing better (up an infinite percentage from zero).
Your suggestion to isolate drives individually may have merit, but this would take up room that enclosure manufacturers are trying hard to save. OTOH, I can build a sturdy rack with the same footprint as a wobbly one; it'll just be a whole lot heavier (and more expensive).
But apparently Prof Brian Cox needs to do a TV programme to explain it to the average viewer.
Ideally you want the drive acoustically isolated from a massive support (a concrete block?), which is in turn acoustically isolated from the environment (including other drives).
As regards performance carry-over from one run to the next: making a profit out of hard drives is all about pushing the limits, so that, for example, you can sell a high-margin 2TB drive for months while competitors are still on a low-margin 1TB version of essentially the same collection of components. The problem is that reliability is of the essence, so having pushed the limits, you have to adaptively and dynamically pull back when it isn't working.
As a result there are all sorts of changes to the way the controller operates the mechanism, driven by slowly varying parameters related to temperature, time since power on, and error rate, and by recalibrations done more often during environmental changes.
Every time the heads move to a new cylinder, there is a settle-time delay while the heads stop waving about before reading starts, so as to avoid read errors. This may well be the main variable behind the differences in tested performance: performance at the beginning of one run is initially identical to that at the end of the previous one (settle times unchanged), and only after a learning period does the drive adapt its settle-time allowances to the new environmental parameters.
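A toy model of that adaptive pull-back, with entirely hypothetical numbers (no real firmware works exactly like this): back off the settle allowance quickly on errors, relax it slowly on clean reads, so the start of a run inherits the previous run's setting:

```python
# Toy model (all numbers hypothetical) of an adaptive settle allowance.
SETTLE_MIN_US, SETTLE_MAX_US = 200.0, 1200.0

def update_settle(settle_us: float, read_error: bool) -> float:
    if read_error:
        # Back off fast: a failed read costs a retry and a full revolution.
        return min(settle_us * 1.25, SETTLE_MAX_US)
    # Relax slowly: only a long clean streak restores the aggressive
    # setting - hence the "learning period" across benchmark runs.
    return max(settle_us * 0.999, SETTLE_MIN_US)

settle = 400.0                           # carried over from the last run
for err in [True, True] + [False] * 5000:
    settle = update_settle(settle, err)
print(f"settle allowance now {settle:.0f} us")
```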
The storage industry has managed to convince their punters to overspend on _everything_.
Unnecessary OM3 patch leads for 10-metre cable runs, Fibre Channel HDDs instead of better caching, lossless Fibre Channel switches instead of well-designed protocols.
Now they are going to want unobtainium-cored tungsten-steel racks as well.
"More money on storage!" goes up the cry.