Interesting
Just hit me to do the math: 600k IOPS / 96 SSDs = ~6,250 IOPS per SSD.
Interesting to think about that vs so many SSD claims of 10s of thousands of IOPS per SSD.
Wonder which SSD they use.
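Back-of-envelope version of that math (the 96-SSD count is from the config above; the "claimed" per-drive figure is just an illustrative datasheet-style number, not from the article):

```python
# Rough back-of-envelope: per-SSD IOPS implied by the published SPC-1 result.
spc1_iops = 600_000      # Dorado5100 result (rounded)
ssd_count = 96           # SSDs in the tested configuration

per_ssd = spc1_iops / ssd_count
print(f"~{per_ssd:.0f} IOPS per SSD")          # ~6250

# Compare against a typical claim of "tens of thousands" of IOPS per drive.
claimed_per_ssd = 30_000  # illustrative number only, not from the article
print(f"Benchmark drives run at ~{per_ssd / claimed_per_ssd:.0%} of that claim")
```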
Huawei has captured the SPC-1 crown for disk drive and flash arrays with a 600,000-plus IOPS result for its Dorado5100 all-flash array. The SPC-1 benchmark tests how networked storage arrays serve data requests from servers in a business environment. IBM's StorWize V7000 headed up the SPC-1 charts with a 520,043.99 result …
Now talk to me about latency and how they handle that. How do they handle caching when they have a failed node in a four-node or larger active-active cluster? How do they get around the issues of space reclamation on their SSDs?
They can have all the IOPS in the world, but if the latency is so high that your systems slow to a crawl, it doesn't mean a damn.
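To put a number on that: by Little's Law, sustained IOPS is roughly the number of outstanding requests divided by response time, so latency directly caps throughput. A quick sketch with made-up latencies, purely for illustration (the real figures are in the SPC-1 full disclosure):

```python
# Little's Law: throughput = concurrency / latency.
# Latency values below are illustrative only.
def max_iops(outstanding_requests: int, latency_ms: float) -> float:
    return outstanding_requests / (latency_ms / 1000.0)

for latency_ms in (0.5, 2.0, 10.0):
    print(f"{latency_ms:>5} ms latency, 256 outstanding: "
          f"{max_iops(256, latency_ms):,.0f} IOPS max")
# 0.5 ms -> 512,000 IOPS; 10 ms -> 25,600 IOPS: same box, very different ceiling.
```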
The latency info is in the SPC-1 disclosure. It would be nice if there were failure testing as well, but it seems nobody does that.
No info on space reclamation either, of course, but they are running with 3TB of unused capacity, perhaps as a buffer to help with such an issue. Even with that 3TB unused, they still beat the likes of NetApp on unused storage, at about 32% (vs 43%, if I recall right).
I can't imagine many customers outside of China (and immediate surrounding area) buying from Huawei regardless :)
I'd love to see SPC-1 results from the likes of Pure Storage, Nimble, Nimbus, and other names that I am forgetting.
Anyone looking for more than "One big storage box to rule them all" will look at a flash array for their transactional systems.
Also, as dedupe tech gets better and we see tiers of flash inside a single box (DRAM cache, a small amount of SLC for hot data, and 2- or 3-bit-per-cell MLC for "bulk" storage), I think we will see all-flash arrays in more places.
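A toy sketch of what such in-box tiering logic might look like, assuming a simple access-count promotion policy (the threshold and tier names are invented for illustration, not any vendor's actual algorithm):

```python
# Toy tier-placement policy: hot blocks go to SLC, everything else to bulk MLC,
# with DRAM acting as a read cache in front of both. Threshold is made up.
from collections import Counter

HOT_THRESHOLD = 100        # accesses per interval before a block counts as "hot"
access_counts = Counter()  # block_id -> accesses in the current interval

def choose_tier(block_id: int) -> str:
    access_counts[block_id] += 1
    if access_counts[block_id] >= HOT_THRESHOLD:
        return "SLC"       # small, fast, expensive tier for hot data
    return "MLC"           # 2- or 3-bit-per-cell bulk tier

print(choose_tier(42))     # "MLC" until block 42 has been touched 100 times
# A real array would also demote cold blocks and stage reads through DRAM cache.
```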
Even 10 years ago the idea of 500+ disks in the same frame was thought to be impossible; now we have arrays that scale to 2,000 spindles and have 3 or 4 tiers with automatic data placement. Who's to say what another 10 years will bring?
Yes, I agree that there is a place for flash and DRAM, but that place is in the server. If you have such a high-performance workload that it requires all flash or DRAM, why would you want to jump across an 8Gbps fibre network... or even to a DASD array? You can throw 3 TB of RAM into a four-socket server, so capacity is generally not going to be an issue. SPC-1 is a SAN benchmark, and the SVC V7000 is the only storage array, in the traditional sense, in the mix. Comparing it to a SAN array with a big cache is not appropriate or meaningful. Point being, if you are looking at SVC for a job, you would not be considering the other two arrays, and vice versa. Apples and oranges.
I mean, I have recently been able to dabble a bit with microcontrollers and SD cards, and even my amateurish attempts got me to several hundred IOPS. Since that controller also has a 100 Mbit Ethernet port, I assume it would be possible to just distribute the workload across tens of thousands of those little microcontrollers, each one far less powerful than what your mobile phone contains.
You could, for example, map block numbers to MAC addresses and use modified switches with fixed forwarding tables to distribute the transactions to the individual nodes.
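A minimal sketch of that block-to-node mapping, assuming a static node table (the addresses and node count are invented for illustration):

```python
# Map a logical block number to one of many tiny storage nodes by MAC address.
# A static table stands in for the "fixed switch tables" idea; addresses are made up.
NODE_MACS = [f"02:00:00:00:00:{i:02x}" for i in range(16)]  # 16 hypothetical nodes

def node_for_block(block_number: int) -> str:
    # Simple modulo placement; a real system would also need replication and
    # a rebuild story for when one of these little nodes dies.
    return NODE_MACS[block_number % len(NODE_MACS)]

print(node_for_block(123456))   # -> 02:00:00:00:00:00
```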
So where is the difficulty I'm not seeing?
The difficulty is that scaling out anything with no guarantees of reliability is easy, but scaling out while maintaining the reliability of a single-node system, and having performance scale as well, is orders of magnitude harder.
Pushing everything into the same box with a dedicated bus or interconnect, instead of moving data around over a capacity-bottlenecked, high-latency network, makes this much easier, because then you're effectively dealing with a single-node system with thousands of smaller subcomponents. That said, the networked scale-out approach can work really well with streaming data, but it tends to fall over when you have tens of thousands of small requests to process.
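A rough illustration of why small requests suffer more than streaming once a network hop is involved (the overhead and bandwidth numbers are assumptions chosen only to show the shape of the effect):

```python
# Per-request overhead matters little for big streaming transfers but dominates
# for small random I/O. Numbers below are illustrative assumptions only.
NET_OVERHEAD_MS = 0.5      # assumed per-request network/protocol overhead
WIRE_MBPS = 800            # assumed usable throughput of the interconnect

def effective_mbps(request_kb: float) -> float:
    transfer_ms = request_kb / 1024 / WIRE_MBPS * 1000
    total_ms = transfer_ms + NET_OVERHEAD_MS
    return (request_kb / 1024) / (total_ms / 1000)

print(f"1 MB streaming read: {effective_mbps(1024):.0f} MB/s effective")
print(f"4 KB random read:    {effective_mbps(4):.1f} MB/s effective")
```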
The SVC advertising is all very well, but what did SVC actually provide in the SPC configuration? It didn't provide RAID or any other storage-related services other than acting as a cache. Basically, SVC's role in the SPC-1 benchmark was to act as a passthrough device in order to make 16 separate midrange arrays appear like a single array. Who in their right mind would actually deploy such a system (another lab queen)?