NVMe Caching? Ha! It's more like an NVMeTOO technology brief... Sorry Val, NTAP missed the whole point of why NVMe is relevant: it's end-to-end App-to-Data that's important, not CACHING!
El Reg has asked a large number of storage array vendors about their views of NVMe solid state drives and NVMe over Fabrics access to such drives. The combination promises to effectively kill array network access latency issues and make array access equivalent to reading and writing data from a locally attached flash drive. So …
Thursday 9th February 2017 22:05 GMT JohnMartin
How is it "Me Too" when NetApp is currently the only enterprise array vendor actually shipping standard NVMe devices in its main line of arrays?
The whole point of NVMe is to improve latency and increase queue depths, which in turn improves performance. There's a fairly small number of applications (most of them in HPC environments) where shaving another 50 microseconds off media access time will make a significant difference. Eventually, as applications and operating systems learn to use large amounts of persistent memory effectively, NVMe will take its place alongside NVDIMMs and, in combination with RDMA data transports, will significantly change compute/storage architecture.
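A back-of-envelope Little's Law sketch shows why deeper queues and a shorter command path raise the IOPS ceiling. All queue depths and latencies below are illustrative assumptions, not measurements of any real device:

```python
# Little's Law sketch: sustained IOPS ceiling ~= queue_depth / per-I/O latency.
# The figures below are illustrative assumptions, not vendor measurements.

def max_iops(queue_depth: int, latency_s: float) -> float:
    """Upper bound on IOPS for one queue, per Little's Law (L = lambda * W)."""
    return queue_depth / latency_s

# SAS/AHCI-era device: effectively one queue, depth ~32, ~100 us total latency.
legacy_iops = max_iops(32, 100e-6)
# NVMe device: far deeper queues and a shorter command path (~80 us assumed).
nvme_iops = max_iops(1024, 80e-6)

print(f"Legacy-style ceiling: {legacy_iops:,.0f} IOPS")
print(f"NVMe-style ceiling:   {nvme_iops:,.0f} IOPS")
```

The point isn't the absolute numbers (which are made up) but the shape: NVMe raises both terms at once, so the theoretical ceiling moves by orders of magnitude, even though few applications today can drive it.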
In the meantime, like most fast, relatively expensive media, the best bang for buck in mass storage will come from it forming part of a storage hierarchy: first as a cache, then as a tier, and ultimately as a complete replacement for the previous generation (in this case SAS and the whole SCSI command-set ecosystem).
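As a rough illustration of the "first as a cache, then as a tier" stage, here is a minimal sketch of a small fast layer (NVMe-like) in front of a large slow layer (SAS-like). The class name, sizes, and LRU promotion policy are hypothetical and vastly simpler than any real array's logic:

```python
# Hypothetical sketch of a fast read cache fronting a slow tier of record.
# Real arrays use far more sophisticated promotion/eviction heuristics.
from collections import OrderedDict

class TieredStore:
    def __init__(self, cache_blocks: int):
        self.cache = OrderedDict()   # small fast tier, LRU-evicted
        self.backing = {}            # large slow tier of record
        self.cache_blocks = cache_blocks
        self.hits = self.misses = 0

    def write(self, lba: int, data: bytes) -> None:
        self.backing[lba] = data     # write-through: slow tier stays authoritative

    def read(self, lba: int) -> bytes:
        if lba in self.cache:
            self.hits += 1
            self.cache.move_to_end(lba)          # refresh LRU position
            return self.cache[lba]
        self.misses += 1
        data = self.backing[lba]
        self.cache[lba] = data                   # promote on read
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)       # evict least-recently-used
        return data

store = TieredStore(cache_blocks=2)
store.write(1, b"a")
store.read(1)                        # miss: promoted to fast tier
store.read(1)                        # hit: served from fast tier
print(store.hits, store.misses)
```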
Wednesday 1st February 2017 00:49 GMT Nate Amsden
netapp flash cache
Does it cache writes now? I just tried poking around for some docs but didn't find an immediate answer. Last I read/heard (4-5 years ago), their flash cache was for reads only (for the org I'm in, where we have roughly 90% writes, caching reads in flash doesn't excite me).
3PAR (I am a customer) does flash caching as well, but their architecture likewise limits the flash cache to reads (unless things have changed recently). EMC's flash cache could/can do both reads and writes; never used it so I don't know how well it works, but it sounded good.
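The 90%-writes point can be made with simple arithmetic: under a read-only cache, every write still pays the slow-tier price, so average latency barely moves no matter how good the cache is. A sketch, where all latencies and hit rates are made-up illustrative values:

```python
# Why a read-only cache does little for a write-heavy mix: writes always go
# to the slow tier. Latencies (ms) and hit rates are illustrative assumptions.

def avg_latency_ms(write_frac: float, read_hit_rate: float,
                   fast_ms: float = 0.1, slow_ms: float = 5.0) -> float:
    """Blended average latency with a read-only cache in front of slow media."""
    read_frac = 1.0 - write_frac
    read_lat = read_hit_rate * fast_ms + (1.0 - read_hit_rate) * slow_ms
    return write_frac * slow_ms + read_frac * read_lat

# 90% writes: even a *perfect* read cache leaves almost all I/O on the slow path.
print(round(avg_latency_ms(0.9, read_hit_rate=1.0), 2))   # ~4.51 ms average
# 10% writes: the same cache cuts average latency dramatically.
print(round(avg_latency_ms(0.1, read_hit_rate=1.0), 2))   # ~0.59 ms average
```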
I think removing the SCSI overhead from a typical enterprise array will probably not have a noticeable impact on overall performance (as in freeing enough CPU cycles to do other things; of course, in the 3PAR world those operations are performed on ASICs).
But if/when NVMe gets to the same price (maybe +/- 10% even) as typical SCSI/SAS, there will be little reason not to adopt it, just because... well, why not. While SCSI's overhead does introduce latency, I also expect it to be more robust. In talking with one NVMe startup CEO and his team about a year ago, I was kind of scared by the lengths you need to go to in order to get high performance (direct memory access, etc.); it just seems... very fragile.
Wednesday 1st February 2017 02:58 GMT Anonymous Coward
Removing the SCSI Protocol
".... just about any controller today from any vendor will be bottlenecked by more than a few SSDs. "
So removing that massive SCSI protocol overhead will free up enough CPU cycles to utilise even fewer NVMe drives???
"...The raw number of IOPs capable from an SSD is staggering, but the value is limited without storage efficiencies, quality of service, data protection, management and other resiliency technologies that a storage controller provides with its onboard software."
I call BS -
The reason so many customers got caught out by FAS systems falling over was the IOPS created by the NetApp "storage efficiency" features built into the Ontap software. Even the most experienced NetApp experts couldn't tame these systems' unpredictable workload behaviour.
Never have I heard NetApp tell a customer: Sorry, your Datastores disconnected because perfstat (performance statistics collection tool) shows your system has too much SCSI Protocol overhead.
Usually there were too many IOPS: please contact sales for another shelf. Back-to-back CPs... Or certain CPU domains were overloaded, because even Xeon processors with many threads are useless when Ontap sucks at distributing the workload efficiently.
Disclaimer: I worked for NetApp
Wednesday 1st February 2017 03:16 GMT Anonymous Coward
Val is trying to sell NVMe in a FAS box because it's "somehow better".
Reality is, NVMe is better without the FAS. Not because the technology is flawed, but from a marketing perspective.
Clustered Ontap and FAS have left such a sour taste in customers' mouths that NetApp had to buy SolidFire.
Think of SolidFire as one of those edible food containers. You can eat it if you cannot find a rubbish bin, but the real goodness is inside (NVMe).
Wednesday 1st February 2017 15:36 GMT Val Bercovici
Nate - thanks for your feedback
Yes, FlashCache now has several generations of evolution under its belt and continues to optimize the performance of disk-based Unified FAS arrays. Our All-Flash I/O pipeline is so advanced at the moment that we don't need to double-buffer reads or writes via FlashCache to our SSDs.
OTOH - the main theme NetApp is conveying to the market now is that NVMe shows great promise once fully end-to-end systems are deployed. Being ready at some of the App / OS / Hypervisor / Host / SAN / Controller / Shelf / SSD layers is helpful and enhances customer investment protection - BUT...
... it's very important not to overhype the technology today and then underdeliver on customer expectations.
If performance is a priority - independently audited and peer-reviewed benchmarks are the only way to go! :)