Re: Old tech solved this decades ago
"The solution used two i/o request queues for each disk plus a record of where the heads were and which direction they were moving"
For this to have ANY kind of performance advantage, you'd need an OS that's aware of the feature and can internally reorder requests for best throughput. But on desktops, at least, I'm pretty sure Windows forces physical writes to the disk way too often, and behaves more like DOS in its "serialized" way of doing things than like a REAL operating system (say Linux or BSD, k-thx).
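For what it's worth, the scheme in the quote above is basically the classic elevator (LOOK) algorithm. Here's a minimal sketch of the idea, with one queue per direction of head travel; the class and method names are mine for illustration, not any particular OS's implementation:

```python
import bisect

class ElevatorScheduler:
    """Toy dual-queue elevator (LOOK) disk scheduler: sweep in one
    direction servicing requests in cylinder order, reverse when that
    side runs dry."""

    def __init__(self, start_cylinder=0):
        self.head = start_cylinder
        self.direction = 1   # 1 = sweeping toward higher cylinders
        self.up = []         # pending requests at cylinders >= head
        self.down = []       # pending requests at cylinders < head

    def submit(self, cylinder):
        # Queue the request on whichever side of the head it falls,
        # keeping each queue sorted by cylinder number.
        if cylinder >= self.head:
            bisect.insort(self.up, cylinder)
        else:
            bisect.insort(self.down, cylinder)

    def next_request(self):
        # Keep sweeping in the current direction; reverse only when
        # the queue for that direction is empty.
        if self.direction == 1 and not self.up:
            self.direction = -1
        elif self.direction == -1 and not self.down:
            self.direction = 1
        if self.direction == 1 and self.up:
            self.head = self.up.pop(0)
        elif self.down:
            self.head = self.down.pop(-1)
        else:
            return None      # nothing pending
        return self.head
```

With the head starting at cylinder 50 and requests for 10, 60, 55, 90, and 20 queued, it services 55, 60, 90 on the upward sweep, then 20, 10 on the way back down. The point stands, though: the drive can only exploit this if the layer above actually hands it a batch of outstanding requests to reorder.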
Maybe a massively parallel or heavily async-I/O workload would benefit, sure, but not your average desktop application. Users wouldn't see any difference, in other words, to justify the price.
It's like multi-core in a way: very few desktop applications, if any, even REMOTELY come close to leveraging it. And yet nearly every CPU sold nowadays is dual-core or better. Is the *PERCEIVED* speed of those "modern" OSs (say Win-10-nic) any better than, say, XP or 7? *NOT* *BY* *MY* *MEASUREMENTS* !!! [and Micro-shaft's "OS background schtuff" is at LEAST a little disturbing anyway]
/me wonders just how much ZFS knows about the way the data is stored on the drive...