Cloud first networking?
This could be huge for cloud providers; of course, all these SSDs need connecting somehow.
What a way to sell more network kit. Probably a better way to sell more ports than building your own servers.
Analysis Huawei is developing an NVMe over IP SSD with an on-drive object storage scheme, meaning radically faster object storage and a re-evaluation of object storage's very purpose. At the Huawei Connect 2017 event in Shanghai, Guangbin Meng, Storage Product Line President for Huawei, told El Reg that Huawei is developing an …
What is the advantage of perpetuating protocols optimized for system-board-to-storage access as fabric or network access?
Bare metal systems may, under special circumstances, benefit from traditional block storage simulated by a controller. It allows remote access and centralized storage for booting systems. This can be pathetically slow, and as long as there is a UEFI module or Int13h BIOS extension, there is absolutely no reason why either SCSI or NVMe should be used. The higher latencies introduced by cable lengths and centralized controllers make this use dependent on unusual extensions to SCSI or NVMe which are less than perfect fits for what they are being used for. A simple encrypted simulated drive emulation in hardware that supports device enumeration, capability enumeration, read block(s) and write block(s) is all that is needed in a network protocol for remote block device access. With little extra effort, the rest can be done with a well-written device driver and BIOS/UEFI support that is either native (as is more common today) or added via a flash chip on a network controller. Another option is to put the loader onto an SD card as part of GRUB, for instance.
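To make the "four operations is all you need" point concrete, here is a minimal sketch of what such a remote block protocol's wire format could look like. The opcodes, field names and header layout are entirely illustrative assumptions, not any real protocol:

```python
import struct

# Hypothetical wire format for a minimal remote-block protocol: four
# operations, one fixed-size header, no SCSI/NVMe command-set baggage.
# All opcodes and field sizes here are illustrative.
OP_ENUM_DEVICES = 0x01   # list exported block devices
OP_ENUM_CAPS    = 0x02   # block size, capacity, encryption flags
OP_READ_BLOCKS  = 0x03
OP_WRITE_BLOCKS = 0x04

# header: opcode (1 byte), device id (2), starting block (8), block count (4)
HEADER = struct.Struct("!BHQI")

def make_request(opcode, device_id=0, start_block=0, count=0):
    """Pack a request header for the sketch protocol."""
    return HEADER.pack(opcode, device_id, start_block, count)

def parse_request(data):
    """Unpack a request header back into its fields."""
    return HEADER.unpack(data[:HEADER.size])

req = make_request(OP_READ_BLOCKS, device_id=1, start_block=2048, count=16)
print(parse_request(req))   # (3, 1, 2048, 16)
```

A 15-byte header covering enumeration plus reads and writes is the whole command set; everything else (encryption, auth) would wrap around it rather than bloat it.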
The only reason block storage is needed for a modern bare metal server is to boot the system. We no longer compensate for a lack of RAM with swapping, as the performance penalty is too high and the cost of RAM is so low. In fact, swapping to disk over fabric is so slow that it can be devastating.
As for virtual machines: they make use of drivers which translate SCSI, NVMe or ATA protocols (in poorly designed environments) or implement paravirtualization (in better environments), translating block operations into read and write requests within a virtualization storage system, which can be VMFS-based, VHDX-based, etc. This translation is then translated back into block calls relative to the centralized storage system, where the calls are translated back to block numbers, cross-referenced against a database, and then translated back again to local native block calls (possibly with an additional file system or deduplication hash database in between). Blocks are then read from native devices in different places (hot, cold, etc.) and the translation game begins in reverse.
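The layering described above can be sketched as a chain of address remappings. Each layer below is a toy stand-in (the offsets and multipliers are invented for illustration); the point is simply how many remappings a single guest read passes through:

```python
# Toy model of the VM block-I/O translation chain described above.
# Every mapping here is an illustrative assumption, not real geometry.

def guest_to_vdisk(block):      # guest SCSI/paravirt driver -> virtual disk block
    return block

def vdisk_to_datastore(block):  # virtual disk block -> offset in VMDK/VHDX file
    return 4096 + block         # pretend a file header precedes the data region

def datastore_to_array(block):  # datastore file block -> array's logical block
    return block * 2            # pretend datastore blocks span two array extents

def array_to_physical(block):   # dedupe/tiering lookup -> native device block
    return block + 100_000      # pretend the hot tier starts at this offset

def read_path(guest_block):
    """One guest read walks every remapping layer before touching media."""
    b = guest_to_vdisk(guest_block)
    b = vdisk_to_datastore(b)
    b = datastore_to_array(b)
    return array_to_physical(b)

print(read_path(10))  # four table walks for a single logical read
```

On a write, the same chain runs in the opposite direction, which is the "translation game begins in reverse" point: the block-protocol framing on the wire buys nothing, since every address is rewritten at every hop anyway.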
NVMe and SCSI are great systems for accessing local storage, but using them in a centralized manner is slow, inefficient and, in the case of NVMe, insanely wasteful.
Instead, implement device drivers for VMware, Windows Server, Linux, etc. which provide the same functionality while eliminating the insane overhead and inefficiency of SCSI or NVMe over the cable, and focus instead on things like security, decentralized hashing, etc.
Please, please, please stop perpetuating the "storage stupid" that this is, and focus on making high-performance file servers, which are far better suited to the task.
No storage subsystem (unless it's designed by someone truly stupid) stores blocks as blocks anymore. It stores records to blocks, which may or may not be compressed. The compressed, referenced blocks are stored in files. Those files may be preallocated into somewhat sector-aligned pools of blocks, but it would be fantastically stupid to store blocks as blocks.
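A minimal sketch of that "records to blocks" layout, assuming a single container file and an in-memory index (both are simplifications; the class and field names are invented for illustration):

```python
import zlib

# Sketch of a modern storage subsystem's block layer: logical blocks are
# stored as compressed records inside a container file, with an index
# mapping logical block -> (offset, compressed length). Illustrative only;
# this is not any vendor's on-disk format.

class BlockStore:
    def __init__(self):
        self.container = bytearray()   # stands in for a preallocated pool file
        self.index = {}                # logical block -> (offset, length)

    def write_block(self, lba, data):
        compressed = zlib.compress(data)
        self.index[lba] = (len(self.container), len(compressed))
        self.container += compressed

    def read_block(self, lba):
        offset, length = self.index[lba]
        return zlib.decompress(bytes(self.container[offset:offset + length]))

store = BlockStore()
store.write_block(42, b"\x00" * 4096)           # a highly compressible block
print(len(store.container) < 4096)              # stored form is smaller
print(store.read_block(42) == b"\x00" * 4096)   # and round-trips exactly
```

The consequence is the next paragraph's point: an incoming NVMe "read block N" can never be passed through to a drive, because block N physically exists only as a compressed record at some file offset.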
As such, NVMe is being used as a line protocol, and instead of being passed through to a drive, it's being processed (probably in software) at fantastically low speeds that even SCSI protocols could easily saturate.
There will be no advantage in extended addressing, since FCoE and iSCSI already supported near-infinite addresses to begin with. There will be no advantage in features, as NVMe would have to issue commands almost identical to SCSI's. There will be no advantage in software support, because drivers took care of that anyway... or at least any system with NVMe support can do pluggable drivers; those which can't will have to translate SCSI to NVMe.
They should simply have created a new block protocol designed to scale properly across fabrics, without the stupid buffering issues that require super-stupid solutions like MMIO, and implemented the drivers.
Someone will be dumb enough to pay for it