Surely anyone with this volume of data is going to be using redundant systems? I'd be scared to put more than 500GB on anything that isn't RAID...
One terabyte hard drives still cramping your decadent data storage lifestyle? No more tears. Seagate is rolling out 1.5TB HDDs this August. Seagate's 1.5TB Barracuda 7200.11 will use four platters to cram the scale-tipping new raw capacity into an eleventh generation of its flagship drive. The storage firm points out it's the …
Surely 640k is enough for anyone? :D
On a more serious note, the big problem for HDD manufacturers in the near future is the movement of data to cloud-based services. With increasing broadband speeds (100Mb/s+ when they finally roll out FTTH, 24Mb/s down now in a lot of places), there is very, very little you'd need to keep on a local machine. About the only app I can think of is HD video editing, and that's a little bit niche. So, why have multi-TB home storage when you can slap it all into a datacentre somewhere and have it automatically backed up to enterprise standards (with encryption etc. so they can't read your data, naturally)?
Yes, it is probably a bit iffy to run your business with all the data in the cloud at the moment - downtime = money lost. However, for a home PC, meh - I can't play games today. Complaint time - PCsRUs has refunded this month's fee - result! The guarantee that you will never, ever lose data (again) is a pretty convincing reason for me to go all web 3.0.
Saying that - all these cloud service providers are going to require really, really large amounts of really dense storage - where could they get that? Oh, that's a neat circle...
While cloud-based services are kind of attractive, I still like to have my personal data stored locally and not depend on people I've never met, in a location I've never personally seen, to keep my stuff safe & private. At least I'd like to have a duplicate of my data locally available for network outages, offline use, etc.
And ultimately I'm not sure what your point is, since even if people no longer keep data on their desktops, the data still has to be stored somewhere - so someone will be buying hard drives, and increasingly bigger ones if everyone adopts the cloud 'paradigm' or whatever. And more of them than a desktop user would need, if the drives are to be RAIDed. So the HD mfrs. are still going to be selling drives, and probably not the relatively cheap consumer-grade ones, but the $pricey$ 10,000+ RPM SCSI variants. So I expect they'll do OK for the foreseeable future.
Not everyone has fat pipes to the 'net. I'm in a hotel in Brazil, and sometimes I see 1400ms in my pings to Google. AV updates which take seconds at home over cable can take 10 minutes here. Downloading a big Word document from the "storage cloud" would take an agonizingly long time.
Even in the states, I've stayed at plenty of hotels that had measly little pipes to the web.
So no, I'm not going to be storing my files online.
I recall, back when I worked for EMC, persuading a customer that the switch from 9GB to 18GB drives wasn't the end of the world, and that no, we didn't need to cut the drives in half (logically, of course - what were you thinking?) so each spindle still only had 9GB to manage.
I watch with interest to see where the performance bottlenecks are moving. Brown/round/rustable was always it in the past. Does 4 platters mean decent sustainable sequential I/O now?
This post has been deleted by a moderator
>I'd just like a 160GB disk that
Ah, but I assume you're not storing every single piece of data imaginable about private citizens, from telephone conversations through web activity to bowel movements.
As for your comment, I fully agree. I wish HDD designers would spend more effort on reliability than capacity.
I also noticed this with Seagate drives and have started looking at other manufacturers. The problem is knowing who makes what, for whom, and in which plant.
There are reasons I want to keep a local copy of data. First there is the little issue of trusting an organisation to keep it safe, not go bankrupt, not make an operations error, not have downtime and so on. Personally I think it's sensible to keep at least two independent copies of data and one of those might as well be local even if we do get data vaulting services usable for TB data volumes.
The other main issue is performance. Maybe we'll have 100Mb/s symmetric BB services, but that's going to take a long time to be universal. It is also nothing like fast enough. I can easily generate 16GB of RAW photographic files in a day - converted to 16-bit TIFF, that's over 50GB. There are enthusiast-level DSLRs on the way which will double the size of those files. Then all that bandwidth has to end up on servers and networks which can cope with the load. There's no P2P-type kludge that is going to work here. I get performance problems when averaging 40-50MBps with current volumes.
Then there is one issue that increased bandwidths can't deal with, as it's bolted into the laws of physics. That's network latency. Over fibre, a round trip will take about 1ms per 100km of separation (200km of total path). That's without taking into account the real physical routes along which the fibre runs, which are almost invariably not straight lines. Try pinging across the Atlantic and most of that delay is just sheer distance. So for synchronous transactional traffic requiring low latency, you had better hope that the service and data are local.
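The latency floor above is easy to sanity-check. A minimal sketch, assuming light in fibre travels at roughly two-thirds of c (~200,000 km/s); real routes add router hops and non-straight paths on top of this:

```python
# Lower bound on round-trip time from fibre path length alone.
C_FIBRE_KM_PER_S = 200_000  # approx. speed of light in glass fibre

def min_rtt_ms(distance_km: float) -> float:
    """Best-case RTT in milliseconds for a given one-way distance."""
    return 2 * distance_km / C_FIBRE_KM_PER_S * 1000

print(min_rtt_ms(100))    # 100 km away -> 1.0 ms floor
print(min_rtt_ms(5500))   # roughly transatlantic -> 55.0 ms floor
```

No amount of extra bandwidth lowers that floor, which is the commenter's point about synchronous traffic.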
So local storage is not going to disappear. However, fundamentally the biggest problem with large disks is that performance does not scale in line with capacity. Maximum read speed scales with linear bit density, while capacity scales with the square of linear bit density once maximum viable spin speeds have been reached for a given drive format. Increasing capacity by adding platters means performance doesn't increase at all, and random access times hardly improve with increased bit density. So accessing all that data is going to get, proportionate to its volume, worse and worse with spinning media like this, and it doesn't much matter if the disk is installed in a PC or a data centre. It can be improved to some level with cache, but there are limits. This fundamental performance scaling factor is the Achilles heel of these large disks. Even if you could read the entire disk at the maximum sustainable 120MBps rate (which I really, really doubt), it would take about 3.5 hours to read all that data. In practice, allow at least double that. Ask somebody who rebuilds very large RAID-5 sets with really large disks how long it takes.
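The 3.5-hour figure checks out as back-of-envelope arithmetic. A quick sketch, taking the drive's quoted capacity and an assumed best-case sustained rate:

```python
# Time to read an entire drive end-to-end at a sustained sequential rate.
# Real RAID rebuilds are far slower: random seeks, parity maths and
# competing I/O all cut into the streaming rate.
def full_read_hours(capacity_gb: float, mb_per_s: float) -> float:
    return capacity_gb * 1000 / mb_per_s / 3600

print(round(full_read_hours(1500, 120), 2))  # ~3.47 hours, best case
```

Doubling that for real-world conditions, as the comment suggests, puts a single-drive rebuild well into a working day.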
(OK - I realise the data-security-in-the-cloud issue applies to my bank account too, but even I draw the line at keeping my cash under the mattress).
Can you imagine the furore if you had all your data in a cloud and some little twonk who is working for them and is also paid by the local criminal gang starts to run decryption on the central data.
Also, as it is illegal not to give up your passwords to encrypted data when asked by the police, your data is freely available at any time for them to go on fishing expeditions.
I'd rather have the disk in my house and under my lock and key, and then deal with backups myself. But that's just paranoid me - nothing to hide, I just want to remain private.
Paris coz she must need 1.5TB drives to store the videos of herself...
"About the only app I can think of is HD video editing, and that's a little bit niche."
There's a host of applications that rely on a fast and reliable sustained data read/write such as Cubase, Avid and many other music, media, video and post-production tools. Enthusiasts and freelance media authors simply wouldn't get this performance from a cloud service. And if you're used to SCSI performance, forget it.
And Tim - I've also noticed a drop in quality. I used to swear by Seagate for reliability, but I've had 2 Barracudas (less than a year old) die on me in as many months.
I've got an HD camcorder and a 10MP camera. On a week's holiday to the Isle of Wight we generated 20GB of data! Same again for a week on Gozo, and there's the trip to Las Vegas coming up...
We have a media center with 4 tuners and lots of series links... that can record a good 5-10GB a day if left to its own devices.
Then there's the Home Server storing backup images of all the household PCs, which currently holds 700GB of backup data alone.
I'm not saying I'm an average Joe, but 1.5TB drives can't come fast enough! I was kinda hoping the next step would be 2TB :(
"I'd just like a 160GB disk that runs no hotter than 20C, has a 500,000,000 hour MTBF, 20 year obsolescence warranty and a cost that's less than a tank of gasoline"
Make sure you buy it in the UK then; there aren't that many hard drives on the market that cost less than a full tank these days.
Re: By design and What is the quality?
By Chris W
I believe you truly understood what I was really saying. And yes, I stopped storing bowel movement data when I discovered dietary fiber; a real moving experience.
Actually I have several sizes of disks depending on what role each plays, but yes, I'd prefer an emphasis on reliability and not size. And at my age (recent retiree) that comment holds true in a number of arenas... if you get my drift. :+)
Firstly, this is one of the reasons why a drive like this would be popular as cheap storage for enterprise vault systems. As it is not a front-end system, performance does not matter.
And to all the skeptics about the 50 years, the figure is correct. IBM released their first hard drive, the 305 RAMAC, in 1956.
They're fantastic, especially with today's drive management applications and options. Now... start increasing production of 10K-15K+ RPM drives to bring their prices down.
Less than 10K rpm on anything over 500GB just sux.
Nowadays... it is the hard drive slowing things down, not the processor or lack of memory.
1.5 terabytes = 1,572,864 megabytes.
1,572,864 divided by 1.4 = 1,123,474.29 floppy disks.
Or 99.8 kilometers end-to-end.
That's 62 miles long.
Or 5,851 feet high stacked on top of each other.
That's 1.1 miles high.
Or a platform 12.6 kilometres square.
But you couldn't do moonwalks on it because the little silver sliders would catch on your socks.
Have a good weekend y'all.
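For anyone who wants to redo the sums: a minimal sketch, assuming the standard 1.44MB formatted capacity for a 3.5" floppy (the post rounds to 1.4) and a 90mm case width:

```python
# Floppy-stack arithmetic for 1.5 TB, using assumed standard 3.5" specs.
CAPACITY_MB = 1.5 * 1024 * 1024   # 1.5 TB in (binary) megabytes
FLOPPY_MB = 1.44                  # formatted capacity per disk
FLOPPY_WIDTH_M = 0.090            # case width, 90 mm

floppies = CAPACITY_MB / FLOPPY_MB
print(f"{CAPACITY_MB:,.0f} MB")                                 # 1,572,864 MB
print(f"{floppies:,.0f} floppies")                              # ~1.09 million
print(f"{floppies * FLOPPY_WIDTH_M / 1000:.1f} km end-to-end")  # ~98 km
```

Using 1.44MB instead of 1.4 trims the pile to about 1.09 million disks, but the line of them still stretches the best part of 100km.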
Seconded. I used to swear by Seagate - and their 5-year warranty (at least here). Then I had one after another failing. To be precise: losing data, and rumbling for almost a minute when attached (e.g. over USB) before 'fdisk -l' sees them. But their proprietary diagnostics say read and write are okay, one way or another. Result: they threaten to sue the costs out of me if I return it.
So I consider the drives trash. Pun intended, and there they go.
It's possible. Going from a 1TB drive to 1.5TB is a whopping increase in size.
I bet WD will continue with their unique drive sizes (640GB, anyone?) and have a 1.28TB drive.
A 48-disk 8U chassis (http://www.cidesign.com/product_detail.jsp?productID=69) full of 1.5TB disks.
In fact, make it a pair. Run each in RAID 6 and then use IBRIX to mirror the boxes.
~69TB of safe storage. 69TB of Porn.... hehehe
Paris, because the 69TB filling has got to start somewhere.
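The 69TB figure follows directly from the RAID 6 layout described above. A minimal sketch of the arithmetic, assuming RAID 6 costs two disks' worth of parity per chassis:

```python
# Usable capacity of a 48-disk RAID 6 chassis of 1.5 TB drives.
# Mirroring the pair of boxes (as suggested) duplicates, not adds,
# so usable space stays at one chassis' worth.
def raid6_usable_tb(disks: int, disk_tb: float) -> float:
    return (disks - 2) * disk_tb  # two disks' capacity lost to parity

print(raid6_usable_tb(48, 1.5))  # 69.0 TB per chassis
```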
Hard drive size bumps are neck and neck with my ever-expanding video collection.
As for cloud computing, these drives are fantastic for it! I have my very own Linux server "cloud" serving out television media, plus RAID 5 data storage for a few home users.
As for needing speed, you try accessing 2 video streams at once, while serving files and doing random access reads and writes ala BitTorrent, sharing the Slackware 12.1 ISO images, of course! ;)
...how people complain about increases in obvious product performance metrics. I mean nobody is forcing you to buy / use a large drive. And it really isn't your place to worry about the morons that'll lose all of their pornos if they trust that all-important data to a single large drive. If an item doesn't appeal to you, the proper response is to ignore it, not rant to people about how it can't appeal to them either.
Anyone that's serious about their data will run something redundant, and there will always be the need for more storage. Arguing against larger drives makes just about as much sense as arguing against higher efficiency electricity generation - what if that lets those terrible electricity companies make more money from less coal!
And people mostly seem not to realize that storage density increases directly cause higher sequential read/write performance, since higher-density data passes under the read/write heads faster at a given RPM. 120MB/s is actually pretty damned good - you can't really push that over gigabit ethernet, especially if you've got a few in RAID 5 on a decent controller, so they should be plenty fast for any reasonable home fileserver.
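The gigabit claim is easy to verify. A quick sketch, with an assumed ~95% wire efficiency to allow for TCP/IP framing overhead:

```python
# Why one modern drive can saturate gigabit ethernet.
GIGE_RAW_MB_S = 1000 / 8      # 1 Gb/s = 125 MB/s raw
WIRE_EFFICIENCY = 0.95        # rough allowance for protocol overhead
usable = GIGE_RAW_MB_S * WIRE_EFFICIENCY

drive_seq_mb_s = 120          # sustained rate quoted in the thread
print(usable)                 # 118.75 MB/s usable on the wire
print(drive_seq_mb_s >= usable)  # True: a single drive can fill the link
```

So for a home fileserver, the network link, not the disk, is the ceiling.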
Also, anyone operating under the idea that a 10k or 15k drive is essentially just the same technology as a 7200rpm drive with some more RPMs added on is wrong. Vibration, platter strength, and all sorts of things become more complicated at a geometric rate as RPM increases.
As for that cloud idea, I've got 2 problems. 1) My data is /my data./ Don't think I'm going to contract out to some losers in a half-rate datacenter to hang onto it for me. 2) Since when is 100mbit (especially 24mbit) considered fast? I get extremely irritated on the rare occasion when I've got to transfer some things over 100mbit (say a VM image to my old laptop,) and I think I'd need to get a serious supply of good books if I had to wait at 24mbit speeds for everything. I recently had to convert a full 500GB drive from ntfs to a sane filesystem, so I copied the contents to a temporary lv on my fileserver, formatted, and copied back. That operation took long enough at 60-70MB/s, but I imagine that I'd still be waiting for the first copy to finish if I had to deal with even a fast fiber internet connection.
And yes, Slackware ISOs do need their sharing : -)