Success factors for low-latency attachment technologies
On a more technical note: launching a new low-latency attachment technology like DSSD is not a straightforward job. I started marketing CAPI-attached Flash at IBM two years ago, and the outcome was NOT some general-purpose fast Flash storage. Read on.
CAPI, never heard of it? That's because it is only prominent in big number crunchers in finance, life sciences and research: CAPI attaches graphics coprocessors or arithmetic FPGAs directly to the CPU's cache lines, and it can also emulate memory pages. Hence the name: Coherent Accelerator Processor Interface. Cool technology. (It doesn't get much quicker than this.)
The outcome was that since you won't find common applications that understand new protocols like CAPI, NVMe or now DSSD, you have to write your own! Luckily, some industries were just waiting for external memory that feels like "nearline RAM": it bypasses the roughly 20,000 lines of code in the storage access layer, yet comes at a fraction of the cost of true RAM.
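What "feels like nearline RAM" means in practice: instead of going through the block-storage read()/write() path, the application maps the device into its address space and touches it like memory. As a loose illustration (using a temp file as a stand-in for the actual flash device, which is an assumption of this sketch), standard `mmap` shows the access pattern:

```python
import mmap
import os
import tempfile

# A temp file plays the role of the flash device here -- purely illustrative.
path = os.path.join(tempfile.mkdtemp(), "flashdev")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)              # one page of "flash"

with open(path, "r+b") as f:
    view = mmap.mmap(f.fileno(), 4096)   # map the "device" into the address space
    view[0:5] = b"hello"                 # plain memory stores, no storage-stack calls
    data = bytes(view[0:5])              # plain memory loads on the way back
    view.close()

print(data)
```

The point is the shape of the code: once mapped, reads and writes are ordinary memory operations, which is exactly the programming model a CAPI- or DSSD-aware application is written against.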
We went for the most popular rising NoSQL database, Redis, which became "BigRedis" once Redis Labs added CAPI-Flash support. In 2014 we announced the first IBM "Data Engine for NoSQL", a pure Redis appliance in 4U: 2U for Power Linux and 2U for an enterprise-grade CAPI FlashSystem. It packs the equivalent of 24 rack servers of in-memory database, up to 40 TB, while producing only 5-10% of their heat output.
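Redis Labs' actual CAPI-Flash integration is not shown here, but the underlying idea is simple: keep hot keys in scarce RAM and spill the rest to the cheap flash tier, promoting keys back on access. A toy sketch of that tiering (all names and the eviction policy are invented for illustration, not BigRedis internals):

```python
class TieredStore:
    """Toy two-tier key-value store: a small RAM tier backed by a
    large 'flash' tier. Illustrative only -- not how BigRedis works."""

    def __init__(self, ram_slots=2):
        self.ram = {}            # scarce, fast tier
        self.flash = {}          # plentiful, slower tier
        self.ram_slots = ram_slots

    def set(self, key, value):
        if len(self.ram) >= self.ram_slots and key not in self.ram:
            oldest = next(iter(self.ram))        # naive FIFO eviction
            self.flash[oldest] = self.ram.pop(oldest)
        self.ram[key] = value

    def get(self, key):
        if key in self.ram:
            return self.ram[key]
        value = self.flash.get(key)
        if value is not None:
            self.set(key, value)                 # promote back to RAM on access
        return value
```

The appliance wins because the flash tier holds the bulk of the 40 TB while only the working set occupies true RAM.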
The real question is: how big is this [gaming / medicine / web trading / geo science] niche of high-capacity in-memory applications that will be upgraded to support "memory-style Flash" with energy consumption in mind? The (expensive) alternative is "cloud-style" deployment on an ever-growing number of general-purpose servers that know only RAM and storage. Energy costs will decide.
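To make "energy costs will decide" concrete, a rough back-of-the-envelope comparison. Only the 5-10% heat ratio and the 24-server figure come from the article; the per-server wattage and electricity price below are placeholder assumptions:

```python
# Placeholder assumptions, NOT figures from the article:
servers = 24                  # cluster size being replaced (from the article)
watts_per_server = 500        # assumed average draw per rack server
kwh_price = 0.15              # assumed electricity price in $/kWh
hours_per_year = 24 * 365

cluster_kwh = servers * watts_per_server / 1000 * hours_per_year
appliance_kwh = cluster_kwh * 0.10    # upper end of the article's 5-10% range

print(f"cluster:   {cluster_kwh:,.0f} kWh/yr, ${cluster_kwh * kwh_price:,.0f}")
print(f"appliance: {appliance_kwh:,.0f} kWh/yr, ${appliance_kwh * kwh_price:,.0f}")
```

Even under these made-up inputs the shape of the argument is clear: the appliance's energy bill is an order of magnitude smaller, and that gap scales with the size of the in-memory estate.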
More from the Jülich Supercomputing Centre, blogging on openpower.org: http://openpowerfoundation.org/industry-coverage/julich-tag-teams-with-ibm-nvidia-on-data-centric-computing/