* Posts by flashdude

2 publicly visible posts • joined 29 Nov 2016

Future is bright for NVMe-over-Fabrics with TCP and Ethernet, say Solarflare, Lightbits


Pavilion Data Systems also supports NVMe over TCP

I just wanted to add that Pavilion Data Systems supports NVMe over TCP as well. You can actually leverage both protocols simultaneously from the same array if required for different use cases: some customers will use RoCE within a rack, but TCP to access the storage array from legacy servers that don't have an RDMA-capable Ethernet NIC. The NVMe-over-TCP host driver, which works with any Ethernet NIC, will be standard inbox in future versions of Linux, just as the NVMe-over-Fabrics RoCE driver is today.
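For illustration, here is a rough host-side sketch of what driving both transports from one Linux initiator might look like with nvme-cli. The IP address, port, and NQN are placeholders I made up, not Pavilion specifics, and the exact module/package names can vary by distro:

```shell
# TCP transport: works with any plain Ethernet NIC
modprobe nvme-tcp
nvme discover -t tcp -a 10.0.0.10 -s 4420
nvme connect -t tcp -a 10.0.0.10 -s 4420 -n nqn.2016-01.com.example:subsys1

# RDMA transport (RoCE): needs an RDMA-capable NIC on the host
modprobe nvme-rdma
nvme connect -t rdma -a 10.0.0.10 -s 4420 -n nqn.2016-01.com.example:subsys1

# Either way, the namespaces show up as ordinary /dev/nvmeXnY block devices
nvme list
```

The point being that from the host's perspective the two transports are interchangeable; only the `-t` transport flag and the NIC requirements differ.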

Storage newbie: You need COTS to really rock that NVMe baby


Re: what does that mean?

From what I can gather, "things are done outside the array" probably means volume management, and possibly other kinds of data management too. If true, it would also mean the host tier is contributing CPU power to generate storage performance, making this a somewhat hyper-converged solution. That would make sense, since the article clearly states that a standard COTS server architecture like the type they use on the target side of their array won't generate this kind of performance on its own, even if you add NVMe-oF on the front end and NVMe SSDs on the back end. So most likely they chose to spread the processing across the host tier.