
So we'll have full height drives and no full height bays?
Glad I still have my XT with the full height drive bay. Can't wait to hook 4TB to my 8088.
Hard disk drive suppliers are looking to add platters to increase capacity because of the expensive and difficult transition to next-generation recording technology. There are two candidates to replace the current Perpendicular Magnetic Recording (PMR) technology. The first is Heat-Assisted Magnetic Recording (HAMR) and the …
One has to plan ahead and find drive enclosures for these puppies!
The 12.5mm drives and some limited equipment are already out there; it sounds like this will become the minimum form factor.
At a guess, if you wanted a 2.5TB drive in the 2.5" form factor, that puppy is likely to be 25mm (1 inch) thick, with 8 platters.
It would be nice, no, VERY nice, to see SSDs getting more popular, with the emphasis on CHEAPER as they sell millions of them. But doesn't flash memory still suffer from a maximum number of writes that's much lower than magnetic disks? I know there is wear-levelling technology that shuffles the most frequently written/changed data around so it gets moved into areas of the memory that weren't being used so much, but that can only help so much.
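As a rough illustration of that shuffling, here's a toy Python sketch - purely hypothetical, nothing like a real flash translation layer - where the controller always writes to the least-worn free block:

    # Toy wear-levelling: always write to the least-worn free block.
    class ToyFTL:
        def __init__(self, num_blocks):
            self.erase_counts = [0] * num_blocks  # wear per physical block
            self.mapping = {}                     # logical block -> physical block
            self.free = set(range(num_blocks))

        def write(self, logical_block):
            # Retire the old physical block (erasing it bumps its wear count).
            old = self.mapping.pop(logical_block, None)
            if old is not None:
                self.erase_counts[old] += 1
                self.free.add(old)
            # Place the new data on the least-worn free block.
            target = min(self.free, key=lambda b: self.erase_counts[b])
            self.free.remove(target)
            self.mapping[logical_block] = target

    ftl = ToyFTL(8)
    for _ in range(1000):
        ftl.write(0)          # hammer one logical block...
    print(ftl.erase_counts)   # ...yet the wear is spread across all 8 physical blocks

Even so, you can see the limit: the total erase budget is still finite, it's just shared out evenly.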
SSDs typically cost 15-20x per GB what HDDs do. That's a lot of ground to make up. Plus SSDs have a relatively short lifespan when used in an environment with a lot of read/write cycles, so they need to address that as well as cost before they are suitable for every situation.
It'll be a while yet before terabytes of SSD will be price-competitive with magnetic storage, even if Moore's law stays on track for it to happen eventually. There are physical reasons to suspect it can't ever happen, but those only apply to extrapolation of current SSD technology, and I wouldn't be very surprised if someone found a better way over the next decade or two.
By which time, magnetic storage will be up into the tens or hundreds of TB, using the technologies described in the article that are already working in the lab.
What will we be storing on it? There's always the mousetrap risk (build a better mousetrap, and the world does NOT beat a path to your door!)
Agreed. The biggest problem with SSDs now is that shrinking them with the current materials and process makes them less reliable. Write endurance and unpowered data retention times get worse with each process shrink. I believe that for SSDs to catch magnetic disks on price and size, new materials will be needed.
The height of a drive isn't just the platters; there's got to be space between them for the heads.
Heads only need access to one side of the disk, so currently there's a load of space between the platters that nothing ever actually fills.
How about putting 2 stacks of 2.5" platters into a single 3.5" enclosure - then offset the platters so they interleave.
The idea's not thought through at all, but I think it'd fit. Overall platter area is about the same (OK, I'm assuming the whole surface is writeable), but you've now got something with 2 independent sets of heads and a lower edge speed. You could either treat it as two physical drives, or just bung a RAID controller in.
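If you did bung a RAID controller in, the address mapping is trivial. A hypothetical RAID-0-style stripe across the two head stacks could look like this (the chunk size is a made-up parameter):

    # Hypothetical RAID-0-style striping across the two interleaved stacks:
    # alternate fixed-size chunks between stack 0 and stack 1.
    CHUNK_SECTORS = 128  # assumed stripe unit

    def map_lba(logical_lba):
        """Map a logical sector to (stack, physical sector on that stack)."""
        stripe, offset = divmod(logical_lba, CHUNK_SECTORS)
        stack = stripe % 2                       # even stripes on stack 0, odd on stack 1
        physical = (stripe // 2) * CHUNK_SECTORS + offset
        return stack, physical

    # Sequential transfers alternate between head stacks, so both work in parallel:
    for lba in (0, 127, 128, 256, 300):
        print(lba, "->", map_lba(lba))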
When the platters get bigger you get vibration problems.
3.5" is the practical limit with modern capacities, and they're struggling even then.
If you want good read/write speeds out of them they have to spin at ridiculous speeds...
(And the cost for 10 or 15K RPM disks is ridiculous... )
My first disk was a monster, the size of a washing machine with a 3-phase supply, ten fixed fourteen inch platters and one removable. The head stack was driven by a 1kW voice-coil motor, a 4 inch diameter coil on runners.
Capacity was a whopping 10.7 Megabytes.
Enough to store nearly all the information the state had on all its individuals.
Same here. I can't remember the model, but it was a Burroughs. Looked like the B90 shown here.
http://www.picklesnet.com/burroughs/gallery/bpgb90.htm
The requirement for the 3-phase supply was a combination of the size of the motor needed to turn the platters, and the pneumatic system that threw the heads away from the disk surface when they were detected getting too close.
Those were the days.
It's all very well increasing density per drive, but if the spin speed stays the same then performance per GB dramatically decreases. In the enterprise world there will be a lot of vendors selling more capacity than a customer needs just to get the required IOPS.
Of course, this all goes away once SSDs become bigger and cheaper ... but that's probably a few years away.
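To put rough numbers on that (assuming ~80 random IOPS for a 7200 RPM nearline drive, a figure set by seek and rotational latency that barely changes with capacity):

    # Random IOPS per drive is set by seek + rotational latency, not capacity,
    # so IOPS per GB falls as drives get bigger. 80 IOPS is an assumed figure.
    DRIVE_IOPS = 80

    for capacity_gb in (500, 1000, 2000, 4000):
        print(f"{capacity_gb:5d} GB: {DRIVE_IOPS / capacity_gb:.3f} IOPS per GB")

A 4TB drive delivers an eighth of the IOPS per GB of a 500GB one, which is exactly why vendors end up shipping capacity the customer doesn't need.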
...more heads.
If every platter had twice as many heads, you could double the read/write speed.
Maybe it's time for heads mounted on fixed arms that bisect the platter, and just use linear motors to move them in/out across the platter surface.
You'd also halve access time, as nowhere on the platter would ever be more than half a rotation away from a head (instead of a whole rotation, as now).
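A quick worked example, assuming a 7200 RPM drive and evenly spaced head sets:

    # Rotational latency with N evenly spaced head sets per surface: the wanted
    # sector is at most 1/N of a rotation away, 1/(2N) of a rotation on average.
    RPM = 7200
    ms_per_rev = 60_000 / RPM  # 8.33 ms per revolution

    for heads in (1, 2):
        print(f"{heads} head set(s): avg {ms_per_rev / (2 * heads):.2f} ms, "
              f"worst {ms_per_rev / heads:.2f} ms")

So two head sets take the average rotational latency from about 4.17 ms down to about 2.08 ms.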
Shingled recording being used in the marketplace currently? If you have some proof of this, Chris, then this should be the big story of the day. (Burying the lede, eh?) The impact of shingled recording on the user would show up in performance tests, and it should not be sold without informing the user. If you have proof of Seagate or any other HDD manufacturer currently producing a shingled HDD, please publish it. Stop writing about hearsay and opinion and do some old-fashioned reporting.
Clearly I'm technically challenged, as I don't know what is meant by 'rebuild times' in the article:
>> one HDD manufacturer is telling Xyratex that 3.5-inch drives are dead, with 2.5-inch the future, due to rebuild times: "2TB drive rebuild times are heading towards a week."
Anyone care to explain? Pretty sure this isn't referring to rebuilding a RAID array (well, I hope not :/).
Charles,
This refers to the time needed to rebuild data on a spare drive when a drive in a RAID array fails. I can imagine that the Xyratex CEO was quoting the longest time he could think of. A Pillar Data note -
(http://wikibon.org/wiki/v/Comparison_Test:_Storage_Vendor_Drive_Rebuild_Times_and_Application_Performance_Implications)
- showed a NetApp array needing almost 30 hours to rebuild a 500GB drive holding 2 logical volumes, with both volumes busy during that time. If we quadruple that time to envisage how long a 2TB drive would take, we get 120 hours, which is five days - not too far away from the Xyratex CEO's comment.
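As a toy extrapolation (assuming rebuild time scales linearly with capacity, which ignores controller and workload effects):

    # Naive linear scaling of rebuild time with capacity, anchored on the
    # ~30 hours for 500GB figure from the Wikibon test above.
    BASE_GB, BASE_HOURS = 500, 30

    for capacity_gb in (500, 1000, 2000, 3000):
        hours = BASE_HOURS * capacity_gb / BASE_GB
        print(f"{capacity_gb} GB: ~{hours:.0f} h ({hours / 24:.1f} days)")

On that naive scaling a 3TB drive is already past seven days, so "heading towards a week" looks about right.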
Chris.
I believe Amiga floppy drives used this trick to get an effective capacity of 880K out of the same disks on which PC drives were limited to 720K (http://en.wikipedia.org/wiki/Floppy_disk#Commodore_Amiga).
"Because the entire track is written at once, inter-sector gaps could be eliminated, saving space."
We didn't know to call it shingled in those days though :)
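The arithmetic checks out, assuming the standard DD geometry of 80 cylinders, 2 sides and 512-byte sectors, with the Amiga fitting 11 sectors per track against the PC's 9:

    # DD 3.5" floppy capacity: cylinders x sides x sectors/track x bytes/sector
    def floppy_kib(sectors_per_track):
        return 80 * 2 * sectors_per_track * 512 // 1024

    print("PC DD    (9 sectors/track): ", floppy_kib(9), "KiB")   # 720
    print("Amiga DD (11 sectors/track):", floppy_kib(11), "KiB")  # 880

Those two extra sectors per track are what eliminating the inter-sector gaps bought.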