Reply to post: What the!?!?!?

Huawei developing NVMe over IP SSD

CheesyTheClown

What the!?!?!?

What is the advantage of perpetuating protocols optimized for system-board-to-storage access as fabric or network transports?

Bare metal systems may, under special circumstances, benefit from traditional block storage simulated by a controller: it allows remote access and centralized storage for booting systems. This can be pathetically slow, and as long as there is a UEFI module or an Int13h BIOS extension, there is absolutely no reason why either SCSI or NVMe should be used. The higher latencies introduced by cable lengths and centralized controllers make this use case dependent on unusual extensions to SCSI or NVMe, which are less than perfect fits for what they are being used for. A simple encrypted simulated drive emulation in hardware that supports device enumeration, capability enumeration, read block(s) and write block(s) is all that is needed for a network protocol for remote block device access. With little extra effort, the rest can be done with a well-written device driver and BIOS/UEFI support, either native (as is more common today) or via a flash chip added to a network controller. Another option is to put the loader onto an SD card as part of GRUB, for instance.
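To make the point concrete, here is a minimal sketch of what such a four-operation wire protocol could look like. The opcode values, header layout and field widths are all invented for illustration; nothing here reflects a real product or standard.

```python
import struct

# Hypothetical opcodes: four operations are enough for remote block access,
# versus the hundreds of commands SCSI or NVMe drag along.
OP_ENUM_DEVICES = 0x01  # list the devices the target exports
OP_GET_CAPS     = 0x02  # block size, block count, feature flags
OP_READ_BLOCKS  = 0x03
OP_WRITE_BLOCKS = 0x04

# Fixed 24-byte request header:
#   opcode (u8), pad (u8), device id (u16), reserved (u32),
#   starting block (u64), block count (u64)
REQ = struct.Struct("<BxHIQQ")

def pack_request(op, device, start_block=0, block_count=0):
    """Serialize one request header for the wire."""
    return REQ.pack(op, device, 0, start_block, block_count)

def unpack_request(buf):
    """Parse a request header received from the wire."""
    op, device, _reserved, start_block, block_count = REQ.unpack(buf)
    return op, device, start_block, block_count
```

Encryption, authentication and the actual payload framing would sit around this, but the command set itself stays this small.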

The only reason block storage is needed for a modern bare metal server is to boot the system. We no longer compensate for a lack of RAM with swapping, as the performance penalty is too high and the cost of RAM is so low. In fact, swapping to disk over a fabric is so slow that it can be devastating.

As for virtual machines: they make use of drivers which translate SCSI, NVMe or ATA protocols (in poorly designed environments) or implement paravirtualization (in better environments), turning block operations into read and write requests within a virtualization storage system which can be VMFS based, VHDX based, etc. That translation is then converted back into block calls relative to the centralized storage system, where the blocks are cross-referenced against a database and translated yet again into local native block calls, possibly with an additional file system or deduplication hash database in between. Blocks are then read from native devices in different places (hot tier, cold tier, etc.) and the translation game begins again in reverse.
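The chain of remappings above can be sketched in a few lines. The layer functions, offsets and names here are all invented; real stacks differ, but each hop stands in for one real address translation that a leaner protocol could avoid.

```python
dedup_map = {}  # hypothetical array-side dedup/hash lookup (empty here)

# One lambda per remapping layer described above (illustrative offsets only).
layers = [
    lambda b: b + 2_048,           # guest block -> position in the .vmdk/.vhdx file
    lambda b: b + 1_048_576,       # datastore (VMFS/VHDX) extent -> LUN block on the array
    lambda b: dedup_map.get(b, b), # dedup/hash database lookup on the array
    lambda b: b % 8_388_608,       # tiering (hot/cold) -> native device block
]

def translate(block, layers):
    """Walk a guest block number through every remapping layer, counting hops."""
    hops = 0
    for remap in layers:
        block = remap(block)
        hops += 1
    return block, hops
```

Four hops each way, per I/O, before a single byte of data moves; the reverse path repeats them all.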

NVMe and SCSI are great systems for accessing local storage. But using them in a centralized manner is slow, inefficient and, in the case of NVMe, insanely wasteful.

Instead, implement device drivers for VMware, Windows Server, Linux, etc. which provide the same functionality while eliminating the insane overhead and inefficiency of SCSI or NVMe over the cable, and focus instead on things like security, decentralized hashing, etc.

Please, please, please stop perpetuating the "storage stupid" (which is what this is) and focus instead on making high performance file servers, which are far better suited to the task.
