HDD data density to hit 2.4Tb/in² 'by 2014'

World+Dog may have its eye on solid-state storage, but hard disk engineering isn't going to run out of steam any time soon. Disks capable of holding an amazing 2.4 trillion bits in each square inch of their surface are coming within the next five years, an industry executive has forecast. Speaking at Japan's Information …


This topic is closed for new posts.
  1. Richard


    Want one

  2. BlueGreen

    Fantastic. And back in the real world...

    error rates, reliability, power consumption etc please. All the boring stuff that matters if you plan to actually use them.

  3. A

    Why bother?

    SSDs are faster, and they're already ahead in the areal density stakes. About the only reason I can think of is that they *might* be cheaper than an SSD. That said, we're talking about five years' time, so I suspect by that point SSDs will beat HDDs on every parameter.

  4. Ian Bradshaw

    poor home users

    who will end up being sold 1 disk for the rest of their life ... which then fails, losing the world + dog.

    PC makers had better be kind and develop hidden RAID mirroring on standard PCs (no end user will be any use at configuring RAID) ...

    the tech exists to do it .. my new Sony Z3 has hidden RAID (it actually uses 2 SSDs striped on an Intel controller to give the storage capacity, but hides the RAID stuff from the OS and even the BIOS boot sequence unless you go tinkering).

  5. Henry Wertz Gold badge


    I have three comments on this...

    a) Awesome, I want one too.

    b) They'd better get the bugs worked out on the 1.5TB disks before they start trying 3TB, 15TB, etc... a friend's been working for someone with large storage requirements, the 1.5s from every vendor they've gotten so far have been basically unusably buggy, with firmware updates not helping.

    c) Guys (especially Microsoft) had better hurry up with large-disk support. Regular partition tables have a 2TB limit (EFI doesn't, though, I think). ext2 & ext3 have kludges to get to maybe 16TB, but normally are limited to 2TB. NTFS is in a similar boat. JFS, XFS, and I think ext4, support large file systems (Sun and IBM AIX have supported large file systems for a while now, too).
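    The 2TB ceiling on regular (MBR) partition tables falls straight out of the on-disk format: the table stores sector counts in 32-bit fields, and tooling assumes classic 512-byte sectors. A quick sanity check (both figures are the standard assumptions, not anything vendor-specific):

    ```python
    SECTOR_BYTES = 512            # classic sector size assumed by MBR tooling
    LBA_BITS = 32                 # MBR stores sector counts in 32-bit fields

    max_bytes = (2**LBA_BITS) * SECTOR_BYTES
    print(max_bytes / 2**40, "TiB")   # -> 2.0 TiB
    ```

    GPT uses 64-bit LBA fields, which is why EFI-partitioned disks sail past this limit.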

  6. Haku


    I'd hate to be the one doing the tape backups from even a single 15TB drive with today's consumer capacity tapes.
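    For scale, here's the back-of-envelope arithmetic on that backup job, assuming roughly LTO-4-era native figures (~800GB per tape at ~120MB/s; both numbers are assumptions for illustration, not vendor specs):

    ```python
    drive_tb = 15
    tape_gb = 800                  # assumed native tape capacity
    tape_mb_s = 120                # assumed native streaming rate

    tapes_needed = -(-drive_tb * 1000 // tape_gb)        # ceiling division
    hours = drive_tb * 1_000_000 / tape_mb_s / 3600
    print(tapes_needed, "tapes,", round(hours, 1), "hours")  # 19 tapes, 34.7 hours
    ```

    And that's assuming the tape streams flat-out the whole time, with someone on hand to swap cartridges.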

  7. factor _

    More vapourware from Hitachi GST

    Hitachi GST has been a load of hot air on bigger disks for the last few years.

    In 2008 they were talking about 5TB by 2010. In 2009 they're talking about 15TB by 2013. However, they have delivered nothing in the last few years: their biggest current disk is 1TB -- well short of the offerings from WD, Seagate & Samsung.

    Perhaps less marketing BS from Hitachi until they deliver a real product?

  8. Frank

    @Ian Bradshaw re. poor home users

    If they are using 2 SSDs striped, that would be for access speed. Striping is for speed, mirroring is for data recovery.

    Apart from 'specialist' kit, I can't imagine any maker of home PCs putting an extra HD in the box to give RAID mirroring, since the average consumer would not appreciate it and would just moan about the higher price. Aside from that, the average consumer would need a return-to-shop process to fit a replacement HD and restore the mirror array after a failure.

    If storage reliability is a problem for the 'new' HDs then the best way forward would be for the HD itself to perform mirroring, perhaps placing mirrored data on another platter. This would give an improvement in data storage reliability but would halve the effective capacity of the HD.

    I suspect that average home users will have to learn simple but robust backup procedures involving external drives. After your first experience of losing a couple of years' worth of downloaded music files, you soon learn that technique :)

  9. Steve Foster

    @Henry Wertz

    MBR partitioning has a 2TB per partition limit. Switch to GPT (supported today) to surpass this.

    NTFS's volume limit is 2^64-1 clusters, though Windows implementations so far (up to WS2003/XP at least) work with a lower maximum of 2^32-1 clusters. This gives 16TB (-4KB) with the default 4K cluster size, rising to 256TB (-64KB) at 64K clusters today, with scope to raise the limit to 1 YB (-64KB), which is more than an order of magnitude more storage (per volume!) than currently exists across all the HDDs in the entire world (according to Wikitrivia, anyway).
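    The cluster arithmetic works out as follows -- a quick sketch using the 2^32-1-cluster implementation limit documented for Windows NTFS, which is what yields the 16TB figure:

    ```python
    KiB = 1024
    impl_clusters = 2**32 - 1          # Windows NTFS implementation limit

    for cluster in (4 * KiB, 64 * KiB):
        max_bytes = impl_clusters * cluster
        print(f"{cluster // KiB}K clusters: {max_bytes / 2**40:.0f} TiB max volume")
    # 4K clusters: 16 TiB, 64K clusters: 256 TiB (each a cluster short of round)
    ```

    The on-disk format's own 2^64-1 ceiling at 64K clusters is what gives the ~1 YB theoretical maximum.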

  10. Andy Barber

    Sinclair Microdrive

    They were rated at 100KB but normally only gave <up to> 90KB.

    They worked fine for me!

    I actually regarded adding the third drive to my Sinclair QL as <excessive>.

  11. Steven Jones

    Think about the IOPs...

    The fundamental problem with increased areal density is that performance relative to capacity gets ever worse. Unless disks can be spun faster (and they are pretty well at the physical limits now) and the heads moved more quickly, things only get worse. The number of random IOPs on HDDs has barely changed since 15K drives became available, and there is absolutely no sign of any major move in that area. Sequential access speed goes up only linearly whilst capacity goes up with the square. Broadly, quadruple the capacity and you only double the transfer rate - so it takes twice the time to read all your data.
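    That square-versus-linear relationship can be sketched with round numbers (the drive sizes and rates here are hypothetical, purely to show the scaling):

    ```python
    def full_read_hours(capacity_tb, mb_per_s):
        """Time to sequentially read an entire drive, in hours."""
        return capacity_tb * 1_000_000 / mb_per_s / 3600

    # Doubling linear bit density doubles throughput but quadruples capacity:
    today = full_read_hours(1.0, 100)      # hypothetical 1TB drive @ 100MB/s
    denser = full_read_hours(4.0, 200)     # 4x capacity, only 2x throughput
    print(round(today, 1), round(denser, 1))   # -> 2.8 5.6: read time doubles
    ```

    Every density generation therefore makes the drive slower relative to its own capacity, which is exactly the access-density problem.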

    Access density is going to get ever worse with these beasts. Hard disks are increasingly going to get used for near-line archival use. Essentially they are going to head more the way of tape. A good SSD will slaughter an HDD on IOPS performance, low latency and so on. In five years' time HDDs will be consigned to budget systems and near-line semi-archival purposes. I'll use them to keep my photos on, but I don't want my system disk or high-throughput OLTP services to be one of these in 2014.

  12. Michael

    Hard disks are increasingly going to get used for near-line archival use

    = pr0n collection. Need an improved collection manager.

    NTFS's volume limit is 2^64-1 clusters, though Windows implementations so far (up to WS2003/XP at least) work with a lower maximum of 2^32-1 clusters.

    Doesn't apply with ext or ZFS, tho. I can see a case where XP luddites will eventually have to move ... possible hardware war on the horizon between FOSS and M$ fanbois?

  13. Ian Bradshaw

    @ frank

    "If they are using 2 SSDs striped, that would be for access speed. Striping is for speed, mirroring is for data recovery." ... yeah I know ... it was more the technology exists to hide it (which I hadn't seen in action before). Personally I suspect it was to keep the price down and use smaller SSDs for the same capacty tbh ... not seen it mentioned anywhere in their literature and not before I bought it either (not that im complaining its there tho).

    not sure how they're gonna back up 15TB though ... new USB speeds and the like, maybe ... but even so ... a 15TB backup would make most give up, I suspect.

