
"lossless Mellanox Ethernet"
I didn't know regular Ethernet was lossy. Do I need gold-plated cables as well?
IBM Spectrum Scale (GPFS) started out as a parallel-access filesystem for disk-based arrays – so some may have expected it to fall over and die in the face of lightning-fast NVMe SSDs and NVMe fabric-access arrays. But instead it seems to be getting a new lease of life as a data manager atop said fabric-access NVMe …
Sigh. Yes, regular Ethernet is potentially lossy. No, gold-plated cables won't help 'cos there are so many other funky ways and places that frames can go missing (like cheap switches that can't support all ports flat out at wire speed).
Lossless Ethernet (Data Center Bridging, which uses priority flow control to pause senders rather than drop frames) is intended to guarantee frame delivery so's you don't have to add in things like error checking and retransmission further up the stack. Different animals.
As long as storage can be mapped into a *nix device, Spectrum Scale can use it.
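For example – a rough sketch only, with made-up device, server and filesystem names, and stanza keywords recalled from memory rather than checked against a particular Spectrum Scale release – an NVMe namespace that shows up as an ordinary block device can be registered as an NSD and built into a filesystem roughly like this:

# hypothetical stanza file, /tmp/nsd.stanza:
%nsd:
  device=/dev/nvme0n1
  nsd=nsd_nvme01
  servers=nsdserver1
  usage=dataAndMetadata
  failureGroup=1
  pool=system

# register the device as an NSD, then create a filesystem on it:
mmcrnsd -F /tmp/nsd.stanza
mmcrfs gpfs01 -F /tmp/nsd.stanza -T /gpfs/gpfs01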
What Spectrum Scale can achieve is not just high-speed access, but very high aggregate bandwidth to a single filespace from many clients. Historically, it has achieved this by a high degree of parallelism across relatively slow (disk-speed) storage.
That's why it is popular in supercomputer installations where speed and file-store size are both important.
What using NVMe will do is reduce the latency, while the much higher per-device read speed reduces the amount of parallelism needed to obtain the required performance (the aggregate bandwidth that once took dozens of spinning disks can come from a handful of NVMe devices).
Spectrum Scale also allows managed, tiered access to storage of different performance.
The art will be to organize it to get the maximum benefit from that speed.
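To illustrate that tiering point – again only a sketch, with made-up pool names ('nvme' and 'nearline') and policy syntax written from memory, so check the docs before copying – placement and migration are driven by SQL-ish policy rules applied with mmapplypolicy:

/* hypothetical policy file, /tmp/tier.pol */
RULE 'place-new' SET POOL 'nvme'                    /* new files land on the fast tier */
RULE 'age-out'   MIGRATE FROM POOL 'nvme'
                 THRESHOLD(80,60)                   /* start migrating at 80% full, stop at 60% */
                 TO POOL 'nearline'
                 WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30

# run the policy engine against the filesystem:
mmapplypolicy gpfs01 -P /tmp/tier.pol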
I think what the author is highlighting is a modern use case for a 10+ year-old technology. At the moment storage veterans appear to be beside themselves, salivating over anything NVMe like it's the best thing since sliced bread, whilst IBM have been taking it in its stride.
Notwithstanding its many video-surveillance use cases, it's also part of the engine powering the world's fastest supercomputer.