Wow that's fast for the size.
No indication of at least list pricing though? I'm sure it's expensive but it's useful to have at least a ballpark figure...
EMC has launched its all-flash, rack-scale DSSD D5 array offering 10 million IOPS and 144TB in 5U of rack space. Other headline numbers for what EMC calls its rack-scale flash array are 100 microsecond latency and 100GB/sec bandwidth. Consider the D5 product as being suited for “emerging next-generation applications based on …
I saw the recent post on E8 Storage (E8 Storage ... IOPS flash dazzler), this looks like a rather monolithic custom-hardware version with similar spec. It's PCIe rather than Ethernet connected, 5U rather than 2U and fixed size rather than scalable by adding new (commodity) SSDs.
C24 Technologies Ltd.
This is definitely a game changer. Compared to XtremIO, which puts out a mere 150K IOPS for a 36TB-equivalent config, this seems to be orders of magnitude higher in IOPS and much better in latency. Kudos to the DSSD team for pulling this off!
However, there are some drawbacks.
It's an expensive box. The link below cites $1M for an entry-level config.
Also, the PCIe connectivity means this cannot, in general, be used by enterprises pre-wired for FC or Ethernet connectivity. So unlike other flash systems, this is not plug and play. There is no mention of enterprise-class features such as snaps, and it seems app plugins are needed. "Rack-scale flash" would mean no choice of servers except VCE. A great strategy by EMC to rake in more $'s, but customers will suffer from lack of choice.
But nothing can take away what the DSSD folks have achieved here - this is a great box. I hope to see more systems like this in the future, with enterprise-class connectivity options and more features, and of course at a palatable price.
DSSD does have good IOPS, but is it the lowest latency game in town at 100us?
Since another poster brought it up: the IBM FlashSystem 900 offers a standard FC interface with as low as 90us write latency in only 2U of rack space. Optionally, it's sold with the Spectrum Virtualize stack, via either SVC or FlashSystem V9000, to get data services and data reduction.
Now, has anyone noticed it's under a very brief one-year warranty? Will the components be failing by the time you get DSSD's unique cabling run to your 48 servers?
DSSD, welcome to the party. ; )
FlashSystem could have been great, but IBM got a raw product from TMS and didn't invest in it. Instead of writing or buying a next-gen software stack along the lines of Tintri's VMstore, they just added their existing SVC. SVC is a solid but legacy software stack. It uses copy-on-write snaps instead of redirect-on-write... XIV has a far better snap model than SVC/FlashSystem. The third-party ISV integration isn't there because VMware/Hyper-V were not really around when SVC was created. No integrated snap management. You cannot do VM-based snaps, replication, or management. There is no dedupe. IBM needs to take a look at what Tintri is doing from a software perspective and either copy it or buy it.
-- Disclaimer: EMC'er here --
Data services are not a feature of this platform. Great speed is: 10M IOPS in 5 RU, at 100 microsecond latency. This platform expects the protection to happen at the application layer (e.g. Oracle Data Guard, Flashback DB), or to host read-only data sets (for, say, Hadoop).
As a previous poster has stated, this is not a general purpose storage platform. This is to let your business do things that weren't possible before. You can only put so much memory in servers, and then this memory is not addressable across servers as a shared pool (without 3rd party hacks). So how to load massive datasets into 'memory' to perform close-to-realtime analysis? How do you change someone's behaviour in the act of doing something? (think a shopper walking past a display, you have their phone number and location, and you want them to buy). You need something that can perform lightning-quick, and this is what DSSD is. I prefer to think of it as a shared memory extension for servers, rather than a storage platform.
There are examples of customers who are using DSSD to get Exadata-like performance for their Oracle environment (42GB/s in 5 RU vs Exadata 28 RU), but without the fancy DB tuning/layout required with Exadata. And cost savings :)
There are 3 ways to use DSSD from Day 1:
* As a block storage device on Linux (more *nix platform support to follow later).
* As a Hadoop data node (using the Cloudera plugin), with more plugins to follow post-GA. Going with this approach means you can decouple compute from storage, so you are not forced to add storage embedded in your compute nodes each time you add compute (whether you need the storage or not). And vice versa.
* Via the FLOOD API, so you can make calls directly from your app.
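For the first path, a DSSD volume simply appears as a raw block device, so access looks like reads at block offsets on a device node. A minimal sketch of that pattern: the device name is made up, and a temp file stands in for the device so the example is self-contained (real deployments would also use O_DIRECT with aligned buffers, omitted here for portability):

```python
import os
import tempfile

BLOCK = 4096  # typical logical block size

def read_block(path, lba):
    """Read one logical block at the given LBA from a raw device node."""
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.pread(fd, BLOCK, lba * BLOCK)
    finally:
        os.close(fd)

# Stand-in for a device node such as /dev/dssd0 (hypothetical name):
# a zero-filled temp file keeps the sketch runnable anywhere.
with tempfile.NamedTemporaryFile(delete=False) as dev:
    dev.write(bytes(BLOCK * 4))

data = read_block(dev.name, 2)
print(len(data))  # 4096
```

The point is only that the block-device path needs no new programming model; the FLOOD API path, by contrast, means coding directly against the vendor's library.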
PS: I hate the phrase "game changing". Pet peeve I guess.
Kudos to EMC for getting DSSD to work. Questions remain about the non-standard connectivity: From my own experience at IBM working with low-latency attach ("CAPI" = cache coherent memory mapping), EMC will need to offer custom-designed applications with it to sell DSSD in sufficient quantities. Hadoop is a low-hanging fruit from a technical point of view, Spark would be more challenging.
DSSD attachment is a bit like CAPI attachment for the bare IBM FlashSystem (to Anonymous Coward -2: yes, you can also have FlashSystems without the SVC code layer and without FC, at 90us latency)... It's exotic. Any SAN data center will look for FC compatibility, with InfiniBand covering special requirements. DSSD, CAPI and clones remain a scientists' playground - unless you can offer an integrated cluster appliance with them. Also you'll need to make every integrated application aware of memory-mapped storage instead of volume-mapped storage.
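To make that memory-mapped vs. volume-mapped distinction concrete, here's a minimal sketch using standard POSIX mmap on an ordinary file. Nothing here is DSSD-specific (the FLOOD interface isn't public); it just shows the programming-model change an application must be rewritten for:

```python
import mmap
import os
import tempfile

# Ordinary file standing in for a storage volume.
fd, path = tempfile.mkstemp()
os.write(fd, bytes(4096))
os.close(fd)

# Volume-mapped style: every access is an explicit syscall round trip.
with open(path, "r+b") as f:
    f.seek(128)
    f.write(b"A")

# Memory-mapped style: after one mmap() the data is addressed like RAM,
# so plain loads and stores replace read()/write() calls.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)
    first = mm[128:129]   # a load, not a syscall
    mm[129:130] = b"B"    # a store, not a syscall
    mm.flush()
    mm.close()

print(first)  # b'A'
```

On ordinary hardware both paths go through the page cache; the appeal of memory-mapped flash at these latencies is that the second style becomes the native way to reach the media.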
But then the discussion is different: A NoSQL Flash appliance like IBM's "Data Engine for NoSQL" - or potential future EMC integrated devices - will NOT compete with storage. Instead they will compete with "in-memory" databases, the most popular deployment model for low-latency analytics. And the break-even point is not at one or two servers, considering the hefty price of DSSD gear and non-standard hardware. For reference, we designed the 4U-high IBM Data Engine for NoSQL to compete with 24 generic x86 servers running in-memory NoSQL (40+TB). That's not commonplace yet, even for Apache Spark. And it's more a discussion about energy cost versus deployment agility.
My guess is that EMC will rather go the mass-market-compatible way and insert DSSD into established storage products, replacing old bus technology in VMAX, XtremIO etc. Not a game changer, but reasonable progress. It's just not ready yet.
PS. (to Anonymous Coward -2: call it "hampered" with layers of storage code, but that's market demand. Try to stay below 200 microseconds *including* snap/thin/mirroring/data reduction software; SVC is a good reference).
That's not where I was going. I'm not saying SVC is a bad product. It is actually an excellent product from an architectural perspective; V9000 (FlashSystem plus SVC) will easily outperform XtremIO or Pure. I think that is why this EMC product is coming out: they are trying to come up with something that can compete with FlashSystem's raw performance. I didn't mean hampered in terms of performance - that is the one thing FlashSystem does really well. I meant hampered in the market because SVC's functionality is legacy: old snap model, little ISV integration, no integrated snap management, limits on snap counts per pair, no dedupe. Slap something like Tintri's functionality into SVC and keep everything else the same, and it would be the hottest storage system on the market. It is more a comment of frustration. Update SVC's storage stack and you're great.
No, V9000 will not easily outperform Pure or XtremIO - it depends on the workload. FS900 does not compete with the D5; the two should not be compared unless you have what EMC is pitching this thing for: large active datasets that are always hammered. This thing was made for applications like the shit in the Snowden leak - you know, "capture everything and analyze everything in near real time" applications. This is also extremely useful in the financial world, and in AI. This type of tech will change the datacenter as much as flash on SANs has, on the high end, and use of high-end applications will grow. Eliminating the network does present some issues, but they can be worked around. There's probably going to be a market for a long time for SAN storage for stupid things like monolithic websites and SharePoint and photos and docs and payroll databases and crap, but the world is moving to automation and AI and instant analytics. It may take a little while, but it's already happening.
The way this will be sold is strategically. As I've already said and others have said, it will require specific architecture to take advantage of the tech. These large programs and $500B companies are ready. Trust me, when your organization is large or powerful enough, like a mountain generates its own weather, it generates its own economy and technological ecosystem.