Fibre, and now fabric
Who else will IT annoy with these borrowed phrases? We're already disliked by engineers over the use of 'workshop', among others.
NetApp will bring disruptive NVMe-over-Fabrics technology to its customers in a non-disruptive way. NetApp chief evangelist Jeff Baxter gave a presentation at the Flash Memory Summit last week explaining this. He said applications such as artificial intelligence, machine learning and real-time analytics demand lower latency …
Fabric is a well-known term that's been around since at least the advent of Fibre Channel. The main difference between a "fabric" and a "network" is that a network is generally made up of a hierarchy of switches with the top layers oversubscribed; it's usually optimised for north-south traffic.
A fabric, on the other hand, is built so that each node is more or less directly connected to every other node in a non-blocking way. While I am not a network expert, I believe these are also called Clos or spine-leaf networks. They are generally optimised for east-west traffic.
So really a fabric is just a special kind of network, but the use case is sufficiently different from the way the vast majority of networks have been designed that fabrics are really in a class of their own.
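To make the oversubscription distinction concrete, here's a minimal sketch (port counts and speeds are made-up round numbers, not any particular switch) of the ratio people usually quote:

```python
# Rough oversubscription check for a switch tier. Illustrative only:
# real designs factor in failure domains, ECMP, buffer depth, etc.

def oversubscription(downlinks: int, downlink_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Ratio of host-facing bandwidth to upstream bandwidth.
    1.0 means non-blocking; >1.0 means oversubscribed."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# A classic north-south access switch: 48x10G down, 4x40G up -> 3:1
print(oversubscription(48, 10, 4, 40))   # 3.0

# A non-blocking leaf in a spine-leaf fabric: 16x25G down, 4x100G up -> 1:1
print(oversubscription(16, 25, 4, 100))  # 1.0
```

The 3:1 design is fine when most traffic flows north-south to a core; the 1:1 leaf is what you want when servers (or storage nodes) hammer each other east-west.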
Back in the good old days, those east-west networks were called a SAN, which stood for Server Area Network. It just so happened that directly connecting servers across a non-blocking network architecture didn't have much of a use case outside of HPC, but using it to overcome the limitations of SCSI cabling was a no-brainer, and so it became a "Storage Area Network". With scale-out everything (storage, HCI, sharding, map-reduce, etc.) and RDMA, these fabrics (built from Ethernet, InfiniBand and Fibre Channel) are all coming into their own. I think it was Al Shugart who was so threatened by IBM's SSA technology that he dug up this odd HPC tech (Fibre Channel, which was meant to replace something else called FDDI), cut it down to create FC-AL, and then rallied an "anyone but IBM" standards body around it. And lo and behold, SNIA and the entire SAN industry were born. But that's all ancient history now.
Also, what's the matter with the word 'Fibre'? Unless you're bent out of shape about the spelling? And there's actually a good historical reason why Fibre Channel doesn't use the American spelling.
Can't wait to see how NetApp is going to drag down the benefits of NVMe-oh-WTF to the lowest common denominator, which is ON(FAS)TAP.
The only way you can stop another company from acquiring NetApp is to continuously come late to market and to deliver sub-par products.
After all, the party must go on!
The biggest takeaway at Flash Memory Summit was the new form factor proposed by Samsung. Note it was all about cramming density into an enclosure, but also note the connection was NVMe (a modified M.2 interface).
All SSDs, regardless of media type (TLC NAND, QLC NAND, 3D XPoint, SCM, etc.) will use NVMe. Of course NVMe will be available for SCM non-volatile memory, but more traditional NAND will go that direction as well. The U.2 and M.2 connectors make it straightforward, the new CPUs have plenty of PCIe lanes, and what they lack can be made up with PCIe switch ASICs.
Laptops and servers are already there (all NVMe).
All-Flash storage arrays will move to using NVMe-based drives. AFAs will offer NVMe over Fabrics (NVMf) connectivity options alongside legacy SCSI-based protocols. Users will be able to carve out a SCSI LUN or an NVMf namespace as needed.
NVMe over Fabrics using FC (FC-NVMe) will likely take over Fibre Channel connectivity over the next 5 years. It is evolutionary, and can reside alongside SCSI over FC. As new OSs add FC-NVMe support, they will connect using FC-NVMe; legacy systems will connect using SCSI.
Optane and other Storage Class Memory technologies will be used as a caching tier, perhaps for both reads and writes. This may be necessary with QLC NAND to allow efficient use of the technology. I am guessing 99% of application storage requirements should be able to be met with SCM-cached, NAND-based arrays. Pure SCM-based external storage will remain an esoteric tier.
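Why a write cache matters specifically for QLC is easy to show with a toy model: hot blocks get rewritten in SCM and only eventually flushed, sparing the QLC program/erase cycles it can ill afford. Everything below (block numbers, cache size, LRU policy) is an illustrative assumption, not any vendor's design:

```python
# Toy model of an SCM tier absorbing and coalescing writes in front
# of QLC NAND. Purely illustrative, not a real array's caching logic.
from collections import OrderedDict

class ScmWriteCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.dirty = OrderedDict()  # block -> data, in LRU order
        self.nand_writes = 0        # writes that actually reached QLC

    def write(self, block: int, data: bytes) -> None:
        if block in self.dirty:
            # Rewrite of a cached block is absorbed entirely in SCM.
            self.dirty.move_to_end(block)
        elif len(self.dirty) >= self.capacity:
            # Cache full: flush the least-recently-used block to QLC.
            self.dirty.popitem(last=False)
            self.nand_writes += 1
        self.dirty[block] = data

    def flush(self) -> None:
        # Destage everything still dirty to QLC.
        self.nand_writes += len(self.dirty)
        self.dirty.clear()

cache = ScmWriteCache(capacity_blocks=4)
for block in [0, 1, 0, 0, 2, 1, 3]:  # hot blocks 0 and 1 rewritten
    cache.write(block, b"x")
cache.flush()
print(cache.nand_writes)  # 4: seven logical writes coalesced to four
```

The same shape of argument applies to reads: serving hot blocks from SCM hides QLC's higher read latency, which is why I'd expect it as both a read and write tier.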
The only big question is the Ethernet world. NVMf over RoCEv2 seems the most likely solution. As I understand it, RoCEv2 supports IP addressing. But will it be as easy to use as iSCSI? That needs to happen. The industry needs a simple, straightforward way to connect to NVMf storage arrays.
FCoE is likely to finally die if NVMf over Ethernet gains traction. It may linger as a legacy connectivity technology, alongside iSCSI, but I can see no advantage to FCoE over NVMf over RoCEv2.
I agree that FC-NVMe (I've also seen it written NVMe/FC) will be an early winner, with strong Brocade support and a simple story for enterprises with existing FC skills. RoCE and iWARP over 25/100GbE are still not fully baked, and that battle isn't won, so RDMA over Ethernet remains unsettled. Not to mention 25/100GbE is still a bit of a unicorn in the field. Eventually the service providers and hyperscalers will push an Ethernet solution to the forefront, but I can see FC hanging on again because of the enterprise.
The new connectors are great but leave much to be desired as far as a CRU (customer-replaceable unit) goes for enterprise storage manufacturers. I love the density, but it will be a challenge to create serviceable hardware designs. I'm looking forward to innovation in this area.