Steptoe storage vendors cash in on junk platters

Disks break - everyone knows that. Yet storage array vendors have rejected a technology that would get around this and save their customers pots of money. This resulted in Seagate writing off millions of dollars, and customers continuing to spend money buying products from storage vendors that could have been much more …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Thumb Down

    Tell me about it

    This is what patents are for. So when you get a good idea that supersedes your current business model, you can shelve it AND stop anyone else using it as well.

    Result? Less of that frightening change stuff for your company, less of that useful value stuff for your customers.

  2. Anonymous Coward
    Anonymous Coward

    I don't get it...

    Granted I haven't bothered to read through all this tripe, but on the face of it, it seems to be asking why HD vendors rejected a technology which would make their drives last much longer....

    Why on earth would a business want their customers to spend less money?

    Isn't this a complete no brainer?? Surely it's better for HD vendors if their customers have to buy new drives every other day than once in a lifetime?!

    Businesses want to make money, not save customers money....

  3. Graham Marsden

    established vendors...

    ...will tend to reject new technologies that threaten revenues from existing business models without very good reasons.

    See the MPAA, the RIAA, the BPA and a whole bunch of others for more details...!

  4. A J Stiles

    Why we need a change in the law

    This is why we need a change in the law: a sort of "non-consummation" clause for patents. Something to the effect that if a patent has not been actively practised by or under licence from its holder within a reasonable timeframe (2 years?), it should be unceremoniously annulled.

    This would prevent some of the more egregious abuses of the system; such as sitting on a patent merely to prevent anyone else from practising it, or buying up overly-broad patents in the hope that someone may become successful (and vulnerable to a lawsuit) by doing something which appears to infringe upon one of them.

  5. Paul Crawford Silver badge

    Something missing here

    OK, so the sealed boxes have fewer failures than individual disks, but how do they out-do individual disks in RAID groups? Are the boxes not just RAID groups themselves, and if so how are they magically better than separates?

    "Dual RAID" (over boxes outside, and over discs-in-a-box inside) helps, but at the expense of lots of redundant disks in total, so power and cost (presumably) also go up.

    My suspicion is there was much less of an advantage to using them than claimed, and if you bought disks and RAID'd them in your own system your total cost for the same storage volume and reliability was going to be much less. Then add to that the likes of IBM/HP charging around £1k list price per disk (so lots of profit here) and you see why they are not such a hot commodity after all.
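As a rough illustration of how the "dual RAID" overheads stack up (the canister size and group size below are made-up figures for the sketch, not anything from the article), the usable fraction of raw capacity multiplies through both layers:

```python
# Illustrative only: how "dual RAID" overheads multiply. The canister
# size and group size below are hypothetical, not from the article.

def raid5_usable_fraction(n_members: int) -> float:
    """RAID-5 gives up one member's worth of capacity to parity."""
    return (n_members - 1) / n_members

disks_per_canister = 4   # hypothetical inner RAID-5 group
canisters = 5            # hypothetical outer RAID-5 group

inner = raid5_usable_fraction(disks_per_canister)  # 3/4 usable
outer = raid5_usable_fraction(canisters)           # 4/5 usable
total = inner * outer                              # overheads multiply

print(f"usable fraction of raw capacity: {total:.0%}")
# usable fraction of raw capacity: 60%
```

So with RAID inside the can and RAID across the cans, only about 60% of the raw platters you paid for hold data in this example, which is the power-and-cost penalty the comment is pointing at.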

  6. Steven Jones

    To be charitable

    I feel I have to be charitable here and say that the complete lack of any skepticism about what this technology can actually do is nothing to do with the journalist having been taken out for a few good lunches and falling for a few PowerPoint slide sets.

    What I read here is a whole lot of speculative stuff which may, or may not, have some merit but falls far short of any real provable results. So there is some form of RAID system sealed into a canister, and the device controller does the job of the external RAID controller. Maybe, just maybe, there's something in there which reduces failure rates by dealing with the engineering details, like the sources of vibration.

    However - and there is a big however - disks essentially fail because they have moving parts running to incredibly tight tolerances at high speed. Engineer them to a higher standard and they will fail less often - but they will still fail. They fail for more reasons than vibration - lubrication, manufacturing faults and so on. Further, batches of devices fail early due to differences on the manufacturing line at the time. Many people will have experienced this on real arrays and servers - several disks failing over a few months. Not too much of a problem with individual drives: the hot spare kicks in, and you plug in a replacement (for many arrays, that's a self-service option). With several drives sealed into a canister, if you get too high a failure rate then the canister will need replacing (and the data all has to be moved off unless there is RAID across the canisters).

    Now, maybe there are some advantages - only one motor to spin all the drives up? Maybe one spindle and set of bearings. But to me it looks more like RAID-in-a-can, hardly a huge breakthrough and not exactly disruptive technology.

    The ultimate point is that hard drives are fundamentally flawed. Lots of moving parts, and performance that does not scale at the same rate as capacity (capacity goes up as the square of the increase in linear density, sequential access as the linear density, and random access barely at all). Disks are essentially a bodgy, slow, power-intensive and unreliable storage medium with poor latency, and will ultimately get demoted to slow mass storage over the next few years. The technology explained here makes no significant difference to that.
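That scaling argument can be sketched in a few lines (a back-of-the-envelope toy model, not measured drive data):

```python
# Toy sketch of the scaling claim above (illustrative, not measured data):
# if linear bit density improves by a factor k, capacity grows as k**2
# (k more bits along each track, k more tracks), sequential throughput
# as k (more bits pass under the head per revolution), and random access
# barely moves (seek and rotational latency are mechanical).

def scale_with_density(k: float):
    capacity = k ** 2     # areal capacity ~ k^2
    sequential = k        # sequential transfer rate ~ k
    random_access = 1.0   # seek-bound random access ~ unchanged
    return capacity, sequential, random_access

cap, seq, rnd = scale_with_density(2.0)  # linear density doubles
print(cap, seq, rnd)
# 4.0 2.0 1.0
```

So each doubling of linear density gives 4x the capacity but only 2x the sequential speed and roughly no change in random access, which is why drives keep getting relatively slower per gigabyte.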

  7. Tom Maddox Silver badge
    Go

    @PC and SJ

    I've seen the Xiotech presentation on what they do to improve drive reliability, and it's a lot more interesting than you suspect. In essence, one reason that drives fail is because of resonance which occurs as a result of a bunch of drives in a rack all facing the same direction and rotating the same way. The Xiotech canisters are set up to optimize rotational direction and improve air flow to reduce heat (the other big reason drives fail). In addition, the controller software has the capacity to detect when a hard drive is reporting errors, move the data off to a hot spare, and make adjustments to the "failing" drive's configuration to compensate for its condition, essentially remanufacturing the drive on the fly so that it can go back into service.

  8. Anonymous Coward
    Anonymous Coward

    don't blame the hdd vendors

    I don't buy into service loop practices disrupting new entries into the business. Don't get me wrong, service departments are supposed to be a profitable part of any organisation and at the very least something that helps retain customers, but this is such a small part of the storage business that it is unlikely to put a halt on any new tech that would or could be a step-change improvement for customers. If you follow the logic proposed in the article, you would assume that RAID designers would create enclosures that were purposely weak (regarding rotational vibration), causing drive failures to support the profitable service loop.

    In summary, yes drives fail, but not in the numbers that support a service loop commercially strong enough to be a 'barrier to entry' to better performing technology.

  9. Henry Wertz Gold badge

    Some interesting, some not

    So, the extending service life by cancelling out vibrations sounds VERY interesting.

    The "bunch of platters in a large box", on the other hand, does not. OK, this sealed box is more reliable than individual RAID disks --- but, they are not competing against individual disks, they are competing against an array as a whole. If one wanted to, they could make a highly redundant array and not replace disks for 5 years too.

    If possible, one of these companies should come up with an enclosure that can cancel inter-disk vibrations, allowing vendors to plug in their own disks as they are used to, while providing the cancellation to extend disk life.

  10. Wayland Sothcott Bronze badge
    Boffin

    Spinoff

    It would take a small company wanting to get ahead in the market to do this. I would say there is a huge market for long term reliable storage. Something that stores data for at least 25 years. Capacity need not be massive, since drives are already massive.

    Safety goggles to protect against spinoffs.

  11. DR

    it just doesn't add up

    "OK, so the sealed boxes have less failures than individual disks, but how do they out-do individual disks in RAID groups? Are the boxes not just RAID groups themselves, and if so how are they magically better than separates?"

    you've not missed anything... the drive canisters as described in the story are literally just multiple drives bolted together... one fails, and another one takes over. Data is rebuilt using existing RAID methods.

    all I can say is surely it's common sense why nobody does this.

    say you buy a disk canister. It has four disks in it, and only one disk in the canister is used at a time. There are multiple canisters (we'll say 4) in your array of canisters. That's 16 disks in total, but you can only use 4 at once.

    when a drive fails, the intelligent canister just shuffles the next in line into service, and you suppose that this makes the disk canisters last four times longer than a traditional disk (because there are four of them) and lets you spend less time maintaining the storage arrays...

    except it doesn't.

    let's employ a little thing called common sense...

    we'll look at some figures,

    1 disk costs £100.

    so the 4 disks you were planning on putting in your storage array costs £400.

    now say you want to use these canisters: you spend £400 per canister (because it's actually just four disks taped together). You still need four, so you're spending £1600 on disks... (plus whatever extra the hardware costs to recognise disk failure and shuffle the next disk into place).

    Anyway, that's a £1200 overspend on what you could be spending for the same storage.

    now in my experience it takes perhaps 30 minutes or so to change a disk, and whilst it's not the most skilled job in the world to take out a hot-swappable disk and add a new one, we'll pay them a very high rate.

    Costs: £100 for the disk, plus £50 of an engineer's time in man-hours (that's £100 per hour that engineer gets - I want that job!).

    your average disk in my experience doesn't actually fail, but we'll ignore the fact that practically everyone has seen servers in commission for five or ten years with no disk failures...

    and suggest that every disk will fail in three years.

    now we have the investment cost:

    £400 hardware + £100 engineer time (total £500)

    after three years:

    £300 hardware + £100 engineer time (total £900)

    after 6 years:

    £200 hardware + £100 engineer time (total £1200)

    after nine years... well, you'll have probably thrown the disk canisters away, but: £100 for disks + £100 engineer's time (total £1400)

    notice that I adjusted the price to reflect the fact that your £100 few-hundred-GB disks of today are worth less in 3 years, and even less in 6 years (in fact my example showed these disks held their cost price rather well!).

    anyway, you've got 12 years of service,

    you've used the same number of disks (16), you've used four times the man-hours replacing disks, and yet you've still saved money?
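The running totals above can be tallied mechanically in a few lines (the prices and the failure-rate assumption are the comment's own rough figures, not real data):

```python
# Tallying the comment's running totals (the prices and the "every disk
# fails within three years" premise are the comment's own rough figures,
# not real data). Hardware cost per 3-year replacement cycle falls as
# disks get cheaper; engineer time is £100 per cycle as in the totals.

hardware_per_cycle = [400, 300, 200, 100]  # years 0, 3, 6, 9
engineer_per_cycle = 100

running_total = 0
for years, hw in zip((0, 3, 6, 9), hardware_per_cycle):
    running_total += hw + engineer_per_cycle
    print(f"after {years} years: £{running_total}")
# after 0 years: £500
# after 3 years: £900
# after 6 years: £1200
# after 9 years: £1400
```

£1400 over 12 years for separate disks against £1600 up front for the canisters, before counting the canister controller hardware, which is the comparison the comment is driving at.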

    The point is that these companies didn't invest loads and then decide it was a bad idea; they decided that people would realise it's too expensive to have four times the amount of hardware you need sitting idle waiting to be used, especially as the redundant devices may go their entire service life without ever actually being powered up...

    Now, if you say that this technology was to be used on a space station or a satellite, where it's not simply a case of a quick trip to a data centre or a server room to replace the disk, then (and only then, when the cost of fitting a disk vastly outweighs the cost of the disk) is it worth buying multiple redundant spares to just sit there waiting for things to go wrong.

