What a load of........
Seriously, I wish companies would at least make claims that were even remotely realistic. The reason some flash-over-PCIe product has a read latency of 100us is... well... because... you know... it takes 99us to perform the actual read from the flash itself; going across that PCIe link adds a whopping 1us of extra latency. This TeraDIMM thing could be attached to an infinitely fast memory bus and it would only cut the latency from 100us to 99us.
It can't be that the whole of the flash space is simply memory mapped: the TLBs in almost all servers aren't designed for that kind of physical address space, and the DRAM controller expects a response to every request on fixed timing; there is no way for a DIMM to say "excuse me, can I get back to you, that address you asked for is not in my cache".
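On the address-space point, here is a rough back-of-envelope sketch in C. The numbers are mine, not the vendor's: assume a 1 TB device, a data TLB of roughly 1500 entries, and 4 KB or 2 MB pages.

    /* Back-of-envelope: how much flash can a TLB actually cover?
       Assumed numbers, not the vendor's: 1 TB of flash, ~1536 data
       TLB entries, 4 KB and 2 MB page sizes. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long flash_bytes = 1ULL << 40;  /* assumed 1 TB device  */
        unsigned long long tlb_entries = 1536;        /* ballpark server dTLB */
        unsigned long long page_4k = 4ULL << 10;
        unsigned long long page_2m = 2ULL << 20;

        printf("4K pages needed to map the whole device: %llu\n",
               flash_bytes / page_4k);
        printf("flash the TLB can cover with 4K pages:   %llu MB\n",
               (tlb_entries * page_4k) >> 20);
        printf("flash the TLB can cover with 2M pages:   %llu GB\n",
               (tlb_entries * page_2m) >> 30);
        return 0;
    }

Even with huge pages the TLB covers only a few GB at a time, so a big mapped flash region means constant TLB misses on top of the fixed-timing problem.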
And then there is the added power consumption in DIMM slots that the servers were never designed for. And there is no protocol to tell a DIMM that the power is going out, or to prevent a bit of bad software from scribbling all over your "storage".
A graph of latency under mixed read/write load, across a wider range of addresses than the two or three cached addresses probably used to make that latency graph, might also be informative.
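If anyone wants to collect that graph themselves, something like the sketch below would do for the read side: random 4 KB O_DIRECT reads spread across the whole device, so a handful of cached addresses can't flatter the numbers. The device path and the 512 GB span are placeholders I made up; add a writer thread for the mixed-load case.

    /* Crude latency sampler: random 4 KB O_DIRECT reads across the whole
       device, so a few cached addresses can't flatter the numbers.
       The device path and span are placeholders -- adjust to taste. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const char *dev = "/dev/sdX";      /* placeholder device     */
        long long span = 512LL << 30;      /* assumed 512 GB to hit  */
        int fd = open(dev, O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        void *buf;
        if (posix_memalign(&buf, 4096, 4096)) return 1;

        srandom(time(NULL));
        for (int i = 0; i < 10000; i++) {
            long long blk = (long long)random() % (span / 4096);
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            if (pread(fd, buf, 4096, blk * 4096) != 4096) { perror("pread"); break; }
            clock_gettime(CLOCK_MONOTONIC, &t1);
            printf("%lld\n", (t1.tv_sec - t0.tv_sec) * 1000000000LL
                             + (t1.tv_nsec - t0.tv_nsec));  /* latency in ns */
        }
        close(fd);
        return 0;
    }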
At the end of the story they admit the flash has to be accessed through a driver stack, or as a swap device through the OS, so the latency is again going to be no better than that of other PCIe flash devices. In all likelihood the real performance will be worse than other PCIe flash cards, since the power and physical space constraints of a DIMM slot limit the processing the flash controller can do to a lot less than what a controller sitting in a 20W PCIe slot can manage.
I know the idea seems cool, but really these TeraDIMMs are TeraDUMM.