it just doesn't add up
"OK, so the sealed boxes have less failures than individual disks, but how do they out-do individual disks in RAID groups? Are the boxes not just RAID groups themselves, and if so how are they magically better than separates?"
you've not missed anything... the drive canisters as described in the story are literally just multiple drives bolted together: one fails, and another one takes over. data is rebuilt using existing RAID methods.
all I can say is that it's surely common sense why nobody does this.
say you buy a disk canister. it has four disks in it, but only one disk in the canister is used at a time. there are multiple canisters (we'll say four) in your array of canisters; that's 16 disks in total, but you can only use 4 at once.
when a drive fails, the intelligent canister just shuffles the next in line into service, and you suppose that this makes the disk canisters last four times longer than a traditional disk, because there are four of them,
and lets you spend less time maintaining the storage arrays...
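the shuffle mechanism as described boils down to something like this (a toy sketch; the class and names are mine, invented for illustration, not any vendor's actual firmware):

```python
# Toy sketch of the canister described above: four disks, only one in
# service at a time, the "intelligent" enclosure promoting the next
# spare when the active disk fails.

class Canister:
    def __init__(self, num_disks=4):
        self.spares = list(range(num_disks))  # disks waiting in line
        self.active = self.spares.pop(0)      # only one in use at a time

    def fail_active(self):
        """The active disk dies; shuffle the next spare into service."""
        if not self.spares:
            raise RuntimeError("canister exhausted: replace the whole unit")
        self.active = self.spares.pop(0)

c = Canister()
for _ in range(3):
    c.fail_active()   # three failures later, the last disk is in service
# one more failure and the whole unit is scrap
```

note that from the array's point of view each canister is still a single device: the capacity of one disk, at the price of four.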
except it doesn't.
let's employ a little thing called common sense...
we'll look at some figures,
1 disk costs £100.
so the 4 disks you were planning on putting in your storage array costs £400.
now say you want to use these canisters: you spend £400 per canister (because it's really just four disks taped together). you still need four, so you're spending £1600 on disks... (plus whatever extra the hardware costs to recognise disk failure and shuffle the next disk into place).
Anyway, that's £1200 of overspend against what you could be paying for the same usable storage.
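the purchase figures can be checked in a couple of lines (the £100 disk price and the four-disks-per-canister layout are just the example figures from this post):

```python
disk_cost = 100            # £ per disk, as in the figures above
disks_needed = 4           # usable disks in the array
disks_per_canister = 4     # only one of the four is ever active

plain_array = disk_cost * disks_needed                          # £400
canister_array = disk_cost * disks_per_canister * disks_needed  # £1600

print(canister_array - plain_array)  # prints 1200: the overspend in £
```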
now in my experience it takes perhaps 30 minutes or so to change a disk, and whilst it's not the most skilled job in the world to take out a hot-swappable disk and slot a new one in, we'll pay them a very high rate.
costs: £100 for the disk, and £50 of an engineer's time in man-hours (that's £100 per hour that engineer gets; I want that job!). to keep the sums simple, we'll call each round of replacements a flat £100 of engineer time.
your average disk in my experience doesn't actually fail, but we'll ignore the fact that practically everyone has seen servers in commission for five or ten years with no disk failures,
and suggest that every disk will fail within three years.
now we have the investment cost:
at year zero, £400 hardware + £100 engineer time (running total £500).
after three years, £300 hardware + £100 engineer time (running total £900).
after six years, £200 hardware + £100 engineer time (running total £1200).
after nine years... well, you'll probably have thrown the disk canisters away by then, but: £100 for disks + £100 engineer time (running total £1400).
notice that I adjusted the price to reflect the fact that your £100, few-hundred-GB disks of today will be worth less in three years, and even less in six (in fact my example has these disks holding their price rather well!).
anyway, you've got 12 years of service,
you've used the same number of disks (16), you've used four times the man-hours replacing disks, and yet you've still saved money.
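tallying the running totals above (same example figures: hardware price dropping £100 a cycle, a flat £100 of engineer time per visit):

```python
hardware = [400, 300, 200, 100]  # disk spend at years 0, 3, 6 and 9
engineer_per_visit = 100         # engineer time per replacement round

running = 0
totals = []
for spend in hardware:
    running += spend + engineer_per_visit
    totals.append(running)

print(totals)          # prints [500, 900, 1200, 1400]
print(running < 1600)  # True: still under the canister array's up-front price
```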
The point is not that these companies invested loads and then decided it was a bad idea; they worked out up front that people would realise it's too expensive to have four times as much hardware as you need sitting idle waiting to be used, especially as the redundant devices may go their entire service life without ever actually being powered up...
Now if you say that this technology was to be used on a space station, or a satellite, where it's not simply a case of a quick trip to a data centre or server room to replace the disk, then (and only then, when the cost of fitting a disk vastly outweighs the cost of the disk itself) is it worth buying multiple redundant spares to just sit there waiting for things to go wrong.