
Great article
I had wondered what all this SCSI Express murmuring was all about!
A flash device that can put out 100,000 IOPS shouldn't be crippled by a disk interface geared to dealing with the 200 or so IOPS delivered by individual slow hard disk drives. Disk drives suffer from the wait before the read head is positioned over the target track; 11 msecs for a random read and 13 msecs for a random write on …
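To put rough numbers on that argument: with one I/O in flight at a time, IOPS is roughly the reciprocal of the per-operation latency. A back-of-envelope sketch (Python, using the latencies quoted above purely as illustrative figures):

```python
# Rough IOPS from per-operation latency, assuming one outstanding I/O at a time.

def iops_from_latency(latency_ms: float) -> float:
    """At queue depth 1, IOPS ~= 1000 / latency in milliseconds."""
    return 1000.0 / latency_ms

# Latencies quoted above for a slow hard disk drive (illustrative only).
print(f"HDD random read  (11 ms): ~{iops_from_latency(11.0):.0f} IOPS")
print(f"HDD random write (13 ms): ~{iops_from_latency(13.0):.0f} IOPS")

# A flash device sustaining 100,000 IOPS implies roughly 0.01 ms per I/O
# at queue depth 1 (or, more realistically, many I/Os in flight at once).
print(f"100,000 IOPS implies ~{1000.0 / 100_000:.2f} ms per I/O at queue depth 1")
```

Command queueing and faster spindles push a hard drive closer to the ~200 IOPS mark, but that is still roughly 500 times short of the flash device.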
The SSD in my netbook plugs into the motherboard (or some daughterboard - haven't checked) via what I believe is a mini-PCIe socket. Now I'm not claiming that's a suitable replacement for plugging your expensive SSDs into your expensive server hardware...but is my netbook doing something that could be theoretically scaled up, or is it doing something dumb like ATA over PCIe and the host controller is somewhere on the other side of the socket?
Real PCIe would not require special BIOS or OS support; there is no way to interface PCIe directly with a slew of flash chips, so you still have to have a controller which can report itself as a bootable device conforming to standards. The same is true if you pop a SATA controller card into a PCIe slot: it's not what the card actually does, it's how it represents itself to the outside world logically and with I/O requests. For example, OCZ makes the Revodrive, a product that does this, though it uses more PCIe lanes (and costs as much as an entire netbook).
With the typical netbook PCIe SSD, the performance is horrible, due mostly to a slow controller with little if any DRAM cache and little if any parallel flash chip access... there simply isn't enough room on the card for many chips unless you start stacking them, but then the cost gets beyond the price point of a netbook.
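The "how it represents itself" point is easy to see from the operating system's side. On Linux, for instance, every PCIe function advertises a standard class code and the kernel binds a driver based on that, regardless of what flash sits behind the controller. A minimal sketch (assumes Linux sysfs; the class-code meanings are the standard PCI mass-storage assignments):

```python
# List PCI(e) storage controllers and the class code each one advertises.
# Linux-only sketch: reads sysfs directly, no extra libraries required.
from pathlib import Path

STORAGE_CLASSES = {
    "0x0101": "IDE/ATA controller",
    "0x0106": "SATA (AHCI) controller",
    "0x0107": "SAS controller",
    "0x0108": "NVM controller (e.g. NVMe)",
}

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    class_code = (dev / "class").read_text().strip()   # e.g. "0x010601"
    prefix = class_code[:6]                             # base class + subclass
    if prefix in STORAGE_CLASSES:
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        print(f"{dev.name}: {STORAGE_CLASSES[prefix]} "
              f"(class {class_code}, vendor {vendor}, device {device})")
```

A netbook mini-PCIe SSD and a full-size PCIe card look very different physically, but to the OS each one is just whatever identity its controller advertises here.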
Now there's a name I haven't heard for a decade or two, not since the old SCSI interface on my towerified Amiga 1200. Brings back fond memories of huge ribbon cables wide enough to drive a London bus on, that put the piddling little parallel IDE cables to shame. Looking forward to using SCSI again, if only for old times' sake!
In the home-user world it hasn't been popular for some time, but those of us in the professional world have been using SCSI in the intervening years: sometimes on a parallel bus, sometimes over a Fibre Channel bus, sometimes running the Fibre Channel protocols on top of an Ethernet transport, but mostly now we use Serial Attached SCSI.
This is just another in a long line of SCSI transports we have used continuously over the years, albeit a nice one. (SCSI has always been a much better engineered protocol than ATA, and it is much better suited to the unique properties that SSDs bring to the table.)
Great article, I think you're exposing the next big debate in storage. Your point that scale-out architectures depending on Ethernet connectivity will limit solid-state performance is right on. I spent several years working for a scale-out storage vendor and am very familiar with that shortcoming. That brings up an interesting issue: will any of these new interfaces allow connections between systems over moderate distances to enable scale-out without limiting solid-state performance?
It does cut out (some of) the middlemen, but it still presents as a SATA interface with a hard drive attached, and all of the associated overheads; have a look at your control panel (or whatever) and you'll see a SATA interface (Sandforce?). Seeking is blisteringly fast, but the transfer rate isn't much better than my 15k SAS array (and of course mine is much cheaper per GB).
SCSI Express seems to be a way to allow fast SSDs to be (hot) plugged into SAN controllers without having to rewrite the software on the SAN controllers, while still permitting SAS disks to be plugged into any slot.
NVMe seems to be more aimed at plugging fast SSDs directly into PCs and servers without adding much to the cost of the SSDs or the servers.
So it is a bit like having both SAS and SATA interfaces at the two ends of the market.
I did an interesting experiment on a DL360. The disk slots are SATA (regardless of protocol); anyway, I have a 4-disk RAID 0 of 10K 300GB SAS drives, and in the spare slot I put an OCZ Agility 3 60GB drive in a caddy, slotted it into the server, created it as a logical disk, etc. OK, it's only SATA 1, so 1.5 Gbit/s, but it still put the RAID array to shame on physical IOPS.
What I'm saying is that we already have the facility to get that performance from SSDs using existing protocols; random IOPS with sub-millisecond latency is what I'm after as a database specialist.
Is it a conspiracy that the controller is limited to 1.5 Gbit/s for SATA drives? If it were SATA 2 I'd have 300 MBytes per second per drive with sub-millisecond IOPS, and SATA 3 would give 600 MBytes per second... all on one drive! That negates the need for so many 15K disks, so the vendors start losing money. Ever wondered why enterprise SSDs to go in your kit cost 5x+ the price of the commodity ones that actually outperform them?
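For anyone wondering where those per-drive MByte/s figures come from: SATA uses 8b/10b encoding on the link, so the usable ceiling is the raw line rate times 8/10, divided by 8 to get bytes. A quick sketch (Python, nominal link rates only; real drives land a little below these ceilings):

```python
# Nominal SATA link rates vs. usable throughput after 8b/10b encoding.
# Real-world figures sit a bit below these ceilings (protocol overhead, etc.).

LINK_RATES_GBPS = {"SATA 1": 1.5, "SATA 2": 3.0, "SATA 3": 6.0}

for name, gbps in LINK_RATES_GBPS.items():
    usable_mb_s = gbps * 1e9 * 8 / 10 / 8 / 1e6   # strip 8b/10b, convert bits -> bytes
    print(f"{name}: {gbps} Gbit/s link -> ~{usable_mb_s:.0f} MB/s usable")

# SATA 1: ~150 MB/s, SATA 2: ~300 MB/s, SATA 3: ~600 MB/s --
# which is where the 300 and 600 MByte/s per-drive numbers above come from.
```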