Sounds good, no idea what it means
HD tech always seems like something written by a scifi author.
However, 20TB on a 3.5" drive. Crazy.
Western Digital said demand for high-capacity data centre disk drives will keep up over the next few years as it told the world it would begin shipping samples of its new MAMR 18 and 20TB drives over the next four months. The Ultrastar data centre DC HC550 is a helium-filled drive in 16TB and 18TB versions. It uses either …
> HD tech always seems like something written by a scifi author.
The BOFH in me wants to tell the boss that Microwave-assisted magnetic recording technology, or MAMR, means that we'll need a 'backup' microwave in the server room in case one of these new disks fails. Should be good for warming up a pasty properly, rather than disconnecting the thermal sensor on one of the hotter servers.
The advertised size is correct... going by how the hard drive manufacturers derive that number (base 10), versus the binary units most computers use when displaying size.
And technically, the hard drive manufacturers are correct in following the letter of the SI prefixes. After all, kilo only means 1024 when we want it to (when discussing binary things); at all other times, it means 1000. A kilometer is 1000 meters, not 1024 meters. A kilogram is 1000 grams, not 1024; a kiloton is 1000 tons; etc. So to hard drive manufacturers, a kilobyte is 1000 bytes, a megabyte is 1,000,000 bytes, and so on.
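The discrepancy is easy to put numbers on. A quick sketch (illustration only): drive makers count in base 10 (1 TB = 10^12 bytes), while most operating systems report in base 2 (1 TiB = 2^40 bytes), often still labelled "TB" in the dialog.

```python
# Why a drive's advertised (SI, base-10) capacity looks smaller
# when the OS reports it in binary (base-2) units.

def si_to_binary_tib(si_terabytes: float) -> float:
    """Convert an advertised base-10 TB capacity to binary TiB."""
    total_bytes = si_terabytes * 1000**4   # SI: 1 TB = 10^12 bytes
    return total_bytes / 1024**4           # binary: 1 TiB = 2^40 bytes

for tb in (4, 18, 20):
    print(f"{tb} TB advertised -> {si_to_binary_tib(tb):.2f} TiB reported")
# 4 TB  -> 3.64 TiB
# 18 TB -> 16.37 TiB
# 20 TB -> 18.19 TiB
```

Same bytes either way; only the prefix convention differs.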
Sloppy writing by me. Should have said pure 16TB drive. It can make the HC550 a 16TB drive by dropping a platter, but the WD guy said it was skipping the pure 16TB level to go straight to 18TB and 20TB (with shingling). That's because it reckons it will be faster to market with 18 and 20TB drives than Seagate.
Took a new employee to the National Museum of Computing the other week, and mentioned that "back in the day" we had DEC RA81 450MB drives, stacked 3 to a 19" cabinet; they were the latest thing at the time.
GB capacities were just about unheard of and I don't think anyone envisaged storage in terabytes.
The first HD I owned was 120MB, which today sounds impossibly small, but back then was the optional higher-spec upgrade option over the 80MB one Commodore fitted as standard to that model of Amiga...
A year or so later I was in the lab at uni when one of the other postgrads rebooted their seemingly mundane Windows PC, and I did a serious double-take when I saw the POST screen indicating 80MB of installed *memory*.
Good old days.
I can remember the first GB drive appearing in the Uni Physics department when I was there doing my PhD (a bit less than 25 years ago)...
Rather sobering to think I now have more storage capacity on my keyring than the whole department probably had back then.
But then I still get incredulous looks from my kids when I tell them that when I was their age, mobile phones and the internet basically didn't exist (at least for Joe Public and certainly not for spotty minions).
I'm clearly a young whippersnapper, since my first PC (in '96) came with a 20GB drive.
Coincidentally, I got a new laptop yesterday with 16GB of RAM, for pretty much the same price as that 1996 PC, which had 16MB of RAM. But it came with Theme Hospital, therefore was better (bloatyhead FTW).
The writes to a replacement drive in a RAID rebuild are sequential, so there's no difference between traditional and SMR drives. The gotcha for SMR drives is that if you want to write to an individual track, you will have to read and re-write adjacent tracks in the "group" (not sure what they call it).
I would think you'd need RAID software that knew how to handle SMR drives though, if the RAID block/stripe sizes aren't matched to the SMR track "groups" the results would be ugly.
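A toy model of that read-modify-write penalty (pure illustration, not real firmware logic; the "groups" of overlapping tracks are generally called zones, and real zones hold far more than a handful of tracks):

```python
# Toy model: in an SMR zone, tracks overlap like shingles, so changing
# one track means reading and rewriting everything from that track to
# the end of the zone.

ZONE_TRACKS = 8  # illustrative; real zones are much larger

def smr_write(zone: list, track: int, data: str) -> int:
    """Write one track; return how many tracks were physically rewritten."""
    tail = zone[track:]      # read every overlapped track from the target onward
    tail[0] = data           # modify the single track we actually wanted to change
    zone[track:] = tail      # rewrite the whole tail back, in order
    return len(tail)

zone = [f"old{i}" for i in range(ZONE_TRACKS)]
print(smr_write(zone, 2, "new"))  # 6 tracks rewritten for a 1-track update
```

Purely sequential writes always land at the end of a zone, so they skip the tail rewrite entirely, which is why a rebuild's sequential writes behave normally while scattered small writes do not.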
Isn't it only write speeds that are worse for SMR though? In scenarios where you're using the drives in something approximating a WORM-type scenario (i.e. almost entirely consisting of data reads with only very occasional writes), the extra space will then probably be more beneficial than faster write speeds - particularly in large storage arrays where an extra 2TB of storage per drive bay would quickly add up.
As far as I'm aware, SMR and storage arrays do not get on well together.
I tried putting 7 of them in a RAID-Z2 array, just to see what happened. For about the first 100GB or so, it worked great; then it got really slow, as in bytes per second; then it just completely keeled over and failed. Other people who tried it have reported similar things.
Yes - this. I replaced all my SMR drives from Seagate due to horrendous read and write performance on a NAS. The Enterprise drives are still PMR and are fantastic. Potential for data loss is higher with the overlapping of tracks, and can you imagine if the read/write head somehow changes position in the drive?
Over the weekend, I did my monthly backup. This is the "take the 3TB disk out of the fireproof safe, do the backup, put it back in the safe for another month" backup, not the daily backup, or the offsite one.
It failed for the first time, ever. Apparently, my 4TB disk in my PC now has 3.09TB of data, so it could not be backed up onto the 3TB backup disk. So, I went to the shop, bought a 4TB WD Blue for C$99 (about 68 Euros), and did a backup. So, I've now got a spare 3TB lying around.
It occurred to me to do some math. That 4TB disk is 200,000 times the storage capacity of my first 20MB disk in 1985. And at $99, not even counting inflation, it was one tenth the cost. Going by price, storage capacity has increased two million fold over the past three decades. I picked up an 8TB three years ago on sale for $160 or so, and the 10TB, 12TB, and even 14TB have been on sale for a while now.
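The sums above check out. A quick sanity check (the 1985 price is an assumption inferred from "one tenth the cost", i.e. roughly ten times today's C$99):

```python
# Sanity-checking the capacity and price-per-byte ratios above.
old_mb, old_price = 20, 990.0          # 20 MB drive, ~C$990 in 1985 (assumed)
new_mb, new_price = 4_000_000, 99.0    # 4 TB WD Blue, C$99

print(new_mb // old_mb)                # 200000  -- 200,000x the capacity
ratio = (new_mb / new_price) / (old_mb / old_price)
print(round(ratio))                    # 2000000 -- 2 million x storage per dollar
```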
So, as glad as I am to hear it, increases to 18TB or 20TB don't really shock me nowadays. When they stop quoting TB and start quoting PB, then we'll have reached the next level.
Although I used 10, 20 and 40MB HDs, I never put my own money into a hard drive until they were cheaper than 1.44 MB floppies. When floppy storage was $1/MB, I couldn't justify the personal expense of a $10/MB hard drive (I was poor). Floppies haven't fallen in cost very much: I can still get them for around $1 each. And a 20TB drive is just 20 million floppy disks........
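Pedantically, the 20-million figure is a bit generous once you use the floppy's true formatted capacity (a "1.44 MB" disk actually holds 1440 KiB = 1,474,560 bytes):

```python
# How many 1.44 MB floppies fit in a "20 TB" drive?
FLOPPY_BYTES = 1_474_560              # 1440 KiB formatted capacity
drive_bytes = 20 * 1000**4            # 20 TB in manufacturer (SI) terms

print(drive_bytes // FLOPPY_BYTES)    # 13563368 -- closer to 13.6 million disks
```

Still well over thirteen million dollars' worth at a dollar a disk, so the point stands.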
> When floppy storage was $1/MB
Whimper.
Back in 1984, when PCs were big, and the AT hadn't even been released yet, it was a big deal that DOS 2.x allowed you to reformat those DSDD 320KB floppies as 360KB. That allowed a third of a megabyte on a single disk. That 12.5% increase doesn't sound like much, but compared to other media, like 8" floppies at 88KB and "high density" single-sided media at 180KB, 360KB was huge.
Slow, but huge.
The going rate was something like $12 a disk, so a MB was about $35. If you bought a box of 10 Dysans, I remember they were "only" $99, so 20% cheaper than buying individual floppies.
So, 20MB of floppies would be about 60 disks, or about $720 ($600 if buying in bulk). But given the horrible 80ms speeds, as opposed to 20ms for the hard drive, not to mention not having to flip through dozens of floppies, split files, etc., the hard drive was well worth the 35% price increase over the floppies.
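Those mid-80s sums can be checked against the prices quoted (the figures are the poster's; the exact answers come out a touch under the round numbers above):

```python
# Checking the floppy economics above: $12 per 360 KB DSDD disk,
# $99 for a box of ten.
disk_price, box_of_ten = 12.0, 99.0
disk_mb = 360 / 1024                       # 360 KB per disk, in MB

print(round(disk_price / disk_mb))         # 34  -- about $35/MB, as stated
print(round(20 / disk_mb))                 # 57  -- disks needed for 20 MB
print(round(20 / disk_mb) * disk_price)    # 684.0 -- roughly $720 at list price
```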
When the AT came out, and 1.2MB floppies appeared, they cost $99 for a box of ten, and the 360KB prices dropped significantly. But as anyone who ever tried to back up a 20MB disk onto them will remember (hello, FastBack), they were still a pain to use.
In the good old days I recall spending $2,000 CAD for a "gigantic 2MB" external hard drive for use with my Apple II. Lots of fun peeking and poking on that machine, with 64K of RAM expanded to 512K on cards I built to bank-switch the top 16K, plus another card with a Zilog Z80 to run real CP/M properly, rather than the crap Microsoft tried to flog. Things have sure changed since the 1970s, and looking back I would love to return to the days of Peace and Love (the 1960s), before our world became so divided and started turning into the Total Surveillance and control dream of the Technocrats.
If the data you store on your HDD is worth enough to pay a data recovery mob thousands of dollars to get it back, it'd be much cheaper, not to mention more quickly recoverable, more reliable and more effective, to use RAID (for disk failures within the array, up to the number of failures the array is set up to tolerate) and implement both a local backup (for 'box' issues, and for user issues: RAID won't protect you from a delete or format instruction) and an offsite one (for room/building/campus/neighbourhood issues).
Therefore you could use SSDs, because obviously, if the data was that valuable, you'd follow the same procedures as you would with valuable data on HDDs: RAID augmented with local and offsite backups.