Samsung's million-IOPS, 6.4TB, 51Gb/s SSD is ... well, quite something

Samsung is showing off a monster million-IOPS SSD that can pump out read data at 6.4 gigabytes per second and store up to 6.4TB. NVMe PCIe is the fastest SSD interface, blasting SAS and SATA out of the park, and the early promise of Fusion-io is now being realised with 3D NAND PCIe flash drives. The PM1725a [PDF] comes in both …

  1. lansalot

    *cough* how much? *cough*

    1. Trevor_Pott Gold badge

      If you have to ask, you're not a "relevant" customer.

      Welcome to tech.

      1. Ian Michael Gumby
        Thumb Up

        If you have to ask...

        Then you don't have a real need for the tech...

        The real important question is how much heat does it generate and how dense can you pack em?

        ;-)

        You'll have a 4U box with 4 cards, so that's ~25TB of raw storage per box.

        With 10 per rack (depending on power/weight/cooling), that's going to be 250TB per rack.

        4 racks per PB.
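        The arithmetic above is easy to sanity-check. A quick sketch - the cards-per-box and boxes-per-rack counts are this comment's assumptions, not Samsung figures:

```python
# Density back-of-the-envelope for 6.4TB PM1725a cards.
# CARDS_PER_BOX and BOXES_PER_RACK are the comment's assumptions.
CARD_TB = 6.4
CARDS_PER_BOX = 4         # 4 cards in a 4U box
BOXES_PER_RACK = 10       # limited by power/weight/cooling

tb_per_box = CARD_TB * CARDS_PER_BOX       # 25.6 TB raw per box (~25 TB)
tb_per_rack = tb_per_box * BOXES_PER_RACK  # 256 TB raw per rack (~250 TB)
racks_per_pb = 1000 / tb_per_rack          # ~3.9, i.e. roughly 4 racks/PB

print(tb_per_box, tb_per_rack, round(racks_per_pb, 2))
```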

        It's not that great a density relative to spinning rust, but the response time within a cluster (think Spark/big data) would be incredible. Or if you're going old school, an Oracle or Vertica relational DB, which could also be really cool.

        Lots of other cool options too.

        The times are changing, provided they can deliver the quantity and the pricing is something enterprises can afford.

        1. Jim-234

          Re: If you have to ask...

          Quite a few 2U rackmount servers can fit up to 6 of those drives (seem to be listed as PCIe, single slot, low profile). So you could pack them pretty dense. Then of course there are the custom systems designed by the all flash players that hold a lot more cards in a 2U box dedicated to them.

        2. DonL

          Re: If you have to ask...

          "You'll have a 4U rack and 4 to a box so that's ~25 TB of raw storage per box."

          You can easily put 16 2TB enterprise SSDs in a 2U chassis (32TB raw) and be done for under €20,000. And then you'd be able to hot swap them and use hardware RAID if you wanted to (VMware does not really support software RAID).

          So I wonder what the target audience would be if the price turns out to be really high. Perhaps 1U servers and vSAN? Or perhaps situations where speed matters more than reliability?

          1. Ian Michael Gumby

            Re: If you have to ask...

            I'd love to see pricing and price difference between the low profile and the regular sized.

            So if you can use 2U instead of 4U boxes... OK, but the issues then would be power and heat constraints.

            First, forget about VMware since you wouldn't be virtualizing these boxes.

            The idea in my post was to look at tiering the storage so that you had fast storage and slower storage that would be high density and cheaper.

            So I thought of spinning rust because SSDs don't match its density or lower cost. Again, there's a trade-off due to heat and power requirements. Assuming RAID 10, you would need 2X the spinning rust to match the storage in the fast tier.

          2. Ian Michael Gumby

            Re: If you have to ask...

            Speed vs reliability?

            First, the flash SSD is reliable. I think you meant to say resiliency.

            Lots of applications. And keep in mind, you can always house the data in two locations on different types of media to get the redundancy that will give you the resiliency.

          3. Voland's right hand Silver badge

            Re: If you have to ask...

            "And then you would be able to hot swap them and use hardware raid"

            Same thought - it is cute, but you should be able to deliver comparable IOPS, throughput and MTBF using lower-class SSDs and an old-school hardware RAID controller.

      2. Levente Szileszky

        That's nonsense, Trevor...

        Seriously, if you don't ask the price you are not a serious client - you are just a niche boutique firm planning to buy a few of these and build some skunkworks array in your server room.

        As nerdy and cool as that might be, it is certainly not the market Samsung is banking on with a development like this (think of recouping all the R&D and then actually making millions in profit). To make money you need volume, and when you want to sell en masse the right pricing (among other things) can make or break a product.

  2. Sebastian A

    Hnnnnnnnng.

    1. Spacedinvader
      Paris Hilton

      what he said!

  3. Steven 1
    Coat

    Just given me a semi...

  4. anthonyhegedus Silver badge

    Pfizer isn't worried, their product is cheaper

    1. Anonymous Coward
      Gimp

      ...but nowhere near as long lasting :o(

  5. Aqua Marina

    Fix it in the firmware

    Samsung have lost my confidence at the moment.

    I had a dozen or so Evo 840s that simply kept corrupting and slowing down to a snail's pace. It took about two years for the problem to be fixed in the firmware, by which time I'd simply gone out and bought reliable replacements from Crucial.

    I don't think I would stick 6TB of data on a sammy drive.

    http://www.theregister.co.uk/2015/02/22/samsung_in_second_ssd_slowdown_snafu/

  6. Anonymous Coward
    Anonymous Coward

    But if you want data security then you need to use software RAID on the HHHL version, which is not going to be pretty.

    Unless someone makes a PCIe RAID controller, you'd need to use these as cache only. You lose a lot of performance when running the 2.5" version - probably saturating the connector.

    1. Anonymous Coward
      Facepalm

      RAID != backup

      RAID is for "redundancy"... i.e. system reliability. NOT "data security" - that's backup. There's a little clue in the name. Those two things are VERY different. You should really try your best to understand the distinction and learn this the easy way... while you can.

      1. Prst. V.Jeltz Silver badge

        Quite so - when your customer says "I deleted all my work deliberately - please get it back", which represents most backup restore jobs, RAID ain't gonna help you.

        Won't help with viruses either. Or fires. Or theft.

      2. Anonymous Coward
        Anonymous Coward

        What are you talking about? Data security is about keeping your data secure and RAID is part of that. It is the first defence for keeping your data secure against loss or corruption from a failing drive. Backups are a form of data security, so is securing the firewalls, protecting against malware, keeping db logs on separate volumes, etc etc. Data security is a whole raft of measures securing data from different issues and threats.

        If you feel data security starts and ends with backups then you need to keep your fingers crossed and hope that your RPO and RTO are sound. I'd take another look at your risk management plan just in case.

        1. Ian Michael Gumby
          Boffin

          @ACs ... Seriously?

          You do not want to RAID these devices.

          Seriously, are you that thick?

          Consider that you have possibly 4 PCIe slots in each machine. You now have the ability to tier your storage devices: 1 copy of data on the PCIe SATA disks, and then 1 copy on a raided or GFS-type set-up (GFS = global file system), which protects better than RAID because you have it distributed across your cluster. This means that you will want both the fast cards and spinning rust to prevent data loss.

          Now for the fun part. Data loss / Data corruption is only part of data security. If you're working with PII or other sensitive data, you will need to also encrypt your data. Then also you would need to control who has access and then have role based authorization ...

          A bit more than just a raid discussion...

          1. J. Cook Silver badge
            Boffin

            Re: @ACs ... Seriously?

            Seconded.

            RAID is for data protection using inexpensive disks. This PCIe flash device? NOT INEXPENSIVE. (I *might* mirror them if I was super paranoid, but that's just pissing money down the drain.)

            Now, you want to use them for a classic data cache tiering scenario:

            PCIe Flash cards for "hot" tier of storage ( VDI boot storm reduction, super high end database usage, etc.)

            Enterprise SSD in raid 1 or 5 for "Warm" tier(Frequently accessed databases and other files)

            Multiple Giant Raid 6 Array of spinning rust for "cold" storage.

            I've managed to get lucky- the storage arrays we have at $employer are hybrid SSD and spinning rust drives; the SSDs act as the 'hot' and 'warm' tiers, whereas the spinning rust is a raid 6/ triple parity array of slow, large SAS drives. We've not had an I/O problem with them at all in the ~4 years we've had them in use.

            1. Anonymous Coward
              Anonymous Coward

              Re: @ACs ... Seriously?

              "Raid is for Data protection using inexpensive disks. This PCIe flash device? NOT INEXPENSIVE"

              RAID hasn't stood for inexpensive for a long time, it was changed to Independent due to the fact people kept mistaking it for only being used for cheap disks. It seems like that problem still exists.

              Either way if you are using them as a true tier and not a caching layer then your data is at risk of a single point of failure. These are designed as a step down tier from RAM (similar to Xpoint use case) rather than a step up from SSD.

          2. Anonymous Coward
            Anonymous Coward

            Re: @ACs ... Seriously?

            @Ian Michael Gumby

            That's a bit abusive, isn't it? The point was made that they would be used for caching - which is also what you stated with "1 copy of data on the PCIe SATA disks, and then 1 copy on raided" - effectively caching as well. However, if done manually or without a proper caching algorithm/software/hardware, either you'll slow down the fast cards while they flush the queue to the slower drives, or you'll end up with asynchronous mirroring and risk data integrity.

        2. Prst. V.Jeltz Silver badge

          "What are you talking about? Data security is about keeping your data secure and RAID is part of that. "

          No one's saying don't RAID (well, some are, actually). I'm just saying RAID in no way eliminates the need for backups.

          I have both at home, but I'm gonna ditch the RAID to increase capacity, and keep the backups.

      3. Dave Bell

        There are so many choices within RAID. I have seen it argued that some styles of RAID don't make as much sense now, with huge drives, as they did a decade ago. If you're using RAID 5, how long will a faulty 2TB drive take to rebuild? Even simple mirroring takes time.

        I am really not sure of my sources on this, but I am almost glad that this is so far out of my budget.
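        The rebuild-time worry is easy to put rough numbers on. A sketch - the sustained rates are illustrative assumptions, and real rebuilds run slower because they share bandwidth with live I/O:

```python
# Lower-bound rebuild time: streaming a full replacement drive
# at a sustained write rate. Rates below are assumptions, not specs.
def rebuild_hours(capacity_tb: float, sustained_mb_s: float) -> float:
    """Hours to write capacity_tb (decimal TB) at sustained_mb_s."""
    total_mb = capacity_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal)
    return total_mb / sustained_mb_s / 3600

print(f"2TB HDD @ 150MB/s: {rebuild_hours(2, 150):.1f} h")  # ~3.7 h
print(f"2TB SSD @ 500MB/s: {rebuild_hours(2, 500):.1f} h")  # ~1.1 h
```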

  7. Anonymous Coward
    Anonymous Coward

    Will it fit in a Dell Optiplex G1?

    I need to get better performance from Windows ME, will this help?

    1. Stuart Halliday

      Re: Will it fit in a Dell Optiplex G1?

      Yes. But then anything would.

  8. Anonymous Coward
    Trollface

    Also...

    Could help Windows 10 boot in a reasonable time

  9. itzman

    This sort of disk..

    means you could probably power down your CPU and DRAM when idle... knowing that you could restore in an instant.

    I think SSDs are the biggest leap that computers have made in the last 10 years.

  10. Anonymous Coward
    Anonymous Coward

    Filesystem cache

    I'm starting to wonder why.

    Sometimes when I open an old tab I've not used for a while (I'm a bit of a tab-whore, you see) I look at dstat to find out what's taking so long, and see paging /in/ sustained at 120MB/s.

    My, like... 1.5-2 year old 2.5" SSD is probably under an order of magnitude slower than my RAM. I can only imagine what this on the PCIe bus could achieve.

    I've had drilled into me, and have pumped into others, the log-plot graph of response time, transfer rate and cost against size - and this is really starting to upend it!

  11. cspada

    Write latency consistency

    Any data on how long they can sustain this write latency?

  12. quickmana
    Linux

    latency, raid, mdadm, ramblings

    @cspada asks a good question...

    I would love to get my hands on a few of these things!

    If you used a raid controller that introduces a 10 microsecond penalty then you would take a 10%+ hit on advertised latencies.

    I would like to see benchmarks of kernel software RAID (mdadm) vs "hardware" RAID controllers. Software might be the faster choice with these.

    I would love to see a graph of latency as you approach that 1 million IOPS... I'm sure it slips into millisecond oblivion before it maxes out.
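    That 10%+ figure follows if you assume an advertised latency in the region of 100 µs - my assumption, consistent with the maths above, not a quoted Samsung number:

```python
# Relative cost of a fixed per-IO hop (e.g. a RAID controller)
# against different base latencies. Figures are assumptions.
def overhead_pct(base_us: float, penalty_us: float) -> float:
    """Added latency as a percentage of the base latency."""
    return penalty_us / base_us * 100.0

print(overhead_pct(100, 10))   # 10.0 -> 10 us hop on a ~100 us SSD read
print(overhead_pct(5000, 10))  # 0.2  -> negligible on a ~5 ms HDD seek
```

    The same fixed hop that was noise on spinning rust becomes a double-digit tax here, which is why benchmarking in-kernel mdadm against controllers would be interesting.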
