*cough* how much? *cough*
Samsung's million-IOPS, 6.4TB, 51Gb/s SSD is ... well, quite something
Samsung is showing off a monster million IOPS SSD that can pump out read data at 6.4 gigabytes per second and store up to 6.4TB. NVMe PCIe is the fastest SSD interface, blasting SAS and SATA out of the park and the early promise of Fusion-io is now being realised with 3D NAND PCIe flash drives. The PM1725a [PDF] comes in both …
COMMENTS
-
-
-
Tuesday 30th August 2016 20:08 GMT Ian Michael Gumby
If you have to ask...
Then you don't have a real need for the tech...
The really important questions are: how much heat does it generate, and how densely can you pack 'em?
;-)
You'll have a 4U box with 4 cards, so that's ~25TB of raw storage per box.
With 10 boxes per rack (depending on power/weight/cooling), that's going to be ~250TB per rack.
4 racks per PB.
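(A back-of-envelope sketch of that arithmetic in Python, assuming the 6.4TB card, 4 cards per box and 10 boxes per rack as above; none of these are vendor figures.)

    # density estimate: 6.4 TB cards, 4 per 4U box, 10 boxes per rack (all assumptions)
    card_tb = 6.4
    cards_per_box = 4
    boxes_per_rack = 10

    tb_per_box = card_tb * cards_per_box        # ~25.6 TB raw per box
    tb_per_rack = tb_per_box * boxes_per_rack   # ~256 TB raw per rack
    racks_per_pb = 1000 / tb_per_rack           # ~3.9 racks for a raw PB

    print(f"{tb_per_box:.1f} TB/box, {tb_per_rack:.0f} TB/rack, {racks_per_pb:.1f} racks/PB")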
It's not that great a density relative to spinning rust, but the response time within a cluster (think Spark/big data) would be incredible. Or, if you're going old school, an Oracle or Vertica relational DB, which could also be really cool.
Lots of other cool options too.
The times are changing, provided they can deliver in quantity and price it at something enterprises can afford.
-
Tuesday 30th August 2016 21:16 GMT Jim-234
Re: If you have to ask...
Quite a few 2U rackmount servers can fit up to 6 of those drives (they seem to be listed as PCIe, single-slot, low-profile), so you could pack them pretty densely. Then of course there are the custom systems designed by the all-flash players that hold a lot more cards in a 2U box dedicated to them.
-
Tuesday 30th August 2016 21:17 GMT DonL
Re: If you have to ask...
"You'll have a 4U rack and 4 to a box so that's ~25 TB of raw storage per box."
You can easily put 16 enterprise SSD disks of 2TB in a 2U chassis (32TB raw) and be done for below € 20.000. And then you would be able to hot swap them and use hardware raid if you'd want to (VMware does not really support software raid).
So I'd wonder what the target audience would be if the price would be really high. Perhaps using 1U servers and vSAN? Or perhaps situations where speed would matter more than reliability?
-
Wednesday 31st August 2016 12:08 GMT Ian Michael Gumby
Re: If you have to ask...
I'd love to see pricing, and the price difference between the low-profile and the regular-sized cards.
So if you can use 2U instead of 4U boxes... OK, though the issue would be power and heat constraints.
First, forget about VMware since you wouldn't be virtualizing these boxes.
The idea in my post was to look at tiering the storage so that you had fast storage and slower storage that would be high density and cheaper.
So I thought of spinning rust, because SSDs don't match its density or its lower cost. Again, there's a trade-off due to heat and power requirements. Assuming RAID 10, you would need 2x the raw spinning rust to match the usable capacity of the fast tier.
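(A quick worked example of that RAID 10 overhead, assuming four 6.4TB cards per box; illustrative numbers only.)

    # RAID 10 mirrors everything, so usable = raw / 2: the slow tier needs
    # roughly 2x raw spinning rust to back the fast tier's capacity (assumed figures)
    fast_tier_tb = 4 * 6.4                 # four PCIe cards per box, per the estimate above
    raid10_efficiency = 0.5                # mirrored pairs
    slow_tier_raw_tb = fast_tier_tb / raid10_efficiency
    print(f"{fast_tier_tb:.1f} TB fast tier -> {slow_tier_raw_tb:.1f} TB raw spinning rust at RAID 10")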
-
-
-
Friday 2nd September 2016 11:24 GMT Levente Szileszky
That's nonsense, Trevor...
Seriously, if you don't ask the price you are not a serious client; you are just a niche boutique firm planning to buy a few of these and build some skunkworks array in your server room.
As nerdy and cool as that might be, it is certainly not the market Samsung is banking on with a development like this (think of recouping all the R&D and then actually making millions in profit). To make money you need volume, and when you want to sell en masse, the right pricing (among other things) can make or break a product.
-
-
-
-
Tuesday 30th August 2016 11:37 GMT Aqua Marina
Fix it in the firmware
Samsung have lost my confidence at the moment.
I had a dozen or so Evo 840s that kept corrupting and slowing down to a snail's pace. It took about two years for the problem to be fixed in the firmware, by which time I'd simply gone out and bought reliable replacements from Crucial.
I don't think I would stick 6TB of data on a sammy drive.
http://www.theregister.co.uk/2015/02/22/samsung_in_second_ssd_slowdown_snafu/
-
Tuesday 30th August 2016 13:36 GMT Anonymous Coward
But if you want data security then you need to use software raid on the HHHL version, which is not going to be pretty.
Unless someone makes a PCIe RAID controller, you'd need to use these as cache only. You lose a lot of performance running the 2.5" version - probably saturating the connector.
-
-
Tuesday 30th August 2016 19:07 GMT Anonymous Coward
What are you talking about? Data security is about keeping your data secure and RAID is part of that. It is the first defence for keeping your data secure against loss or corruption from a failing drive. Backups are a form of data security, so is securing the firewalls, protecting against malware, keeping db logs on separate volumes, etc etc. Data security is a whole raft of measures securing data from different issues and threats.
If you feel data security starts and ends with backups then you need to keep your fingers crossed and hope that your RPO and RTO are sound. I'd take another look at your risk management plan, just in case.
-
Tuesday 30th August 2016 20:27 GMT Ian Michael Gumby
@ACs ... Seriously?
You do not want to RAID these devices.
Seriously, are you that thick?
Consider that you have possibly 4 PCIe slots in each machine. You now have the ability to tier your storage devices: one copy of the data on the PCIe cards, and one copy on a RAIDed or GFS-type setup (GFS = global file system), which protects better than RAID because it's distributed across your cluster. This means you will want both the fast cards and spinning rust to prevent data loss.
Now for the fun part. Data loss/data corruption is only part of data security. If you're working with PII or other sensitive data, you will also need to encrypt your data. Then you'd need to control who has access, and have role-based authorization ...
A bit more than just a raid discussion...
-
Tuesday 30th August 2016 23:57 GMT J. Cook
Re: @ACs ... Seriously?
Seconded.
Raid is for Data protection using inexpensive disks. This PCIe flash device? NOT INEXPENSIVE. (I *might* mirror them if I was super paranoid, but that's just pissing money down the drain)
Now, what you'd want to use them for is a classic data cache tiering scenario (a rough sketch of the placement logic follows the list):
PCIe flash cards for the "hot" tier of storage (VDI boot storm reduction, super-high-end database usage, etc.)
Enterprise SSDs in RAID 1 or 5 for the "warm" tier (frequently accessed databases and other files)
Multiple giant RAID 6 arrays of spinning rust for "cold" storage.
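(A toy sketch of that sort of tier-placement rule in Python; the thresholds and tier labels are made up for illustration, not taken from any real product.)

    # toy tier-placement policy: route data by how hot it is
    def pick_tier(reads_per_hour: float) -> str:
        if reads_per_hour > 1000:      # illustrative threshold
            return "hot: PCIe flash card"
        if reads_per_hour > 10:        # illustrative threshold
            return "warm: enterprise SSD, RAID 1/5"
        return "cold: big RAID 6 of spinning rust"

    for rate in (5000, 50, 0.2):
        print(rate, "->", pick_tier(rate))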
I've managed to get lucky - the storage arrays we have at $employer are hybrid SSD and spinning rust; the SSDs act as the 'hot' and 'warm' tiers, whereas the spinning rust is a RAID 6/triple-parity array of slow, large SAS drives. We've not had an I/O problem with them at all in the ~4 years we've had them in use.
-
Wednesday 31st August 2016 08:01 GMT Anonymous Coward
Re: @ACs ... Seriously?
"Raid is for Data protection using inexpensive disks. This PCIe flash device? NOT INEXPENSIVE"
RAID hasn't stood for "inexpensive" for a long time; it was changed to "independent" because people kept mistaking it as being only for cheap disks. It seems that problem still exists.
Either way, if you are using them as a true tier and not as a caching layer, then your data is at risk from a single point of failure. These are designed as a step-down tier from RAM (similar to the XPoint use case) rather than a step up from SSD.
-
-
Wednesday 31st August 2016 08:09 GMT Anonymous Coward
Re: @ACs ... Seriously?
@Ian Michael Gumby
That's a bit abusive, isn't it? The point was made that they would be used for caching - which is also what you stated with "one copy of the data on the PCIe cards, and one copy on a RAIDed" setup - also effectively caching. However, if that's done manually, or without a proper caching algorithm/software/hardware, either you'll slow down the fast cards while they flush the queue to the slower drives, or you'll end up with asynchronous mirroring and risk data integrity.
-
-
Thursday 1st September 2016 07:28 GMT Prst. V.Jeltz
"What are you talking about? Data security is about keeping your data secure and RAID is part of that. "
No one's saying don't RAID (well, some are, actually). I'm just saying RAID in no way eliminates the need for backups.
I have both at home, but I'm gonna ditch the RAID to increase capacity, and keep the backups.
-
-
Wednesday 31st August 2016 07:04 GMT Dave Bell
There are so many choices within RAID. I have seen it argued that some styles of RAID don't make as much sense now, with huge drives, as they did a decade ago. If you're using RAID 5, how long will a faulty 2TB drive take to rebuild? Even simple mirroring takes time.
I am really not sure of my sources on this, but I am almost glad that this is so far out of my budget.
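(A naive rebuild-time estimate for the case Dave raises; the 2TB size and the 150MB/s sustained rebuild rate are assumptions, and a controller serving live I/O will be slower.)

    # best-case rebuild time = capacity / sustained rebuild throughput
    capacity_bytes = 2e12          # 2 TB drive
    rebuild_bytes_per_s = 150e6    # assumed 150 MB/s sustained rebuild rate
    hours = capacity_bytes / rebuild_bytes_per_s / 3600
    print(f"~{hours:.1f} hours best case")   # roughly 3.7 hours, before any real-world load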
-
-
-
Wednesday 31st August 2016 04:58 GMT quickmana
latency, raid, mdadm, ramblings
@cspada asks a good question...
I would love to get my hands on a few of these things!
If you used a RAID controller that introduces a 10-microsecond penalty, you would take a 10%+ hit on the advertised latencies.
I would like to see benchmarks of in-kernel mdadm RAID vs "hardware" RAID controllers. Software might be the faster choice with these.
I would love to see a graph of latency as you approach that 1 million IOPS... I'm sure it slips into millisecond oblivion before it maxes out.
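(A rough illustration of both points in Python; the 90-microsecond base read latency and the simple M/M/1-style queueing view are assumptions, not Samsung specs.)

    # fixed controller penalty vs a low base latency, plus a toy view of
    # latency blowing up as you push toward the 1M IOPS ceiling
    base_latency_us = 90.0          # assumed drive read latency
    controller_penalty_us = 10.0
    print(f"controller adds {controller_penalty_us / base_latency_us:.0%} on top of {base_latency_us:.0f} us")

    for load in (0.5, 0.9, 0.99):   # fraction of the advertised 1M IOPS
        latency_us = base_latency_us / (1 - load)   # M/M/1-style approximation
        print(f"{load:.0%} load -> ~{latency_us:.0f} us")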