Hi Lusty, I'm sorry, but you're wrong. While Ethernet is one possible server SAN interconnect, it is by no means required. InfiniBand is quite popular for latency-sensitive deployments, and direct PCI-E interconnect (see: A3Cube) is also available, and works quite well, thank you.
You might also consider schemes like "write two local copies, confirm back to the application while sending the data to a second node, then mark the second local copy as erasable once that node confirms." Throw in the fact that this allows for write coalescing in high-transaction environments, or vendors like SimpliVity that do inline deduplication and compression - and thus only send changed blocks between nodes, because everything is deduped and compressed before being committed - and you realize there are half a dozen schemes for cutting the data volume moving between servers while preserving write integrity.
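(If that first scheme sounds hand-wavy, here's a minimal sketch of the write path I'm describing. The names - ToyServerSanNode, ToyPeer, replicate and so on - are mine for illustration only, not any vendor's actual API.)

```python
# Illustrative only: a toy write path that commits two local copies,
# acks the application immediately, and reclaims the second local copy
# once the peer node confirms it holds the data. Hypothetical names.

import threading

class ToyPeer:
    """Stand-in for the second node; just stores what it receives."""
    def __init__(self):
        self.store = {}

    def replicate(self, key, data):
        self.store[key] = data
        return True          # confirmation back to the writer

class ToyServerSanNode:
    def __init__(self, peer):
        self.primary = {}    # first local copy (stays)
        self.staging = {}    # second local copy (erasable after peer ack)
        self.peer = peer
        self.lock = threading.Lock()

    def write(self, key, data):
        # 1. Commit two local copies so a single local media failure
        #    can't lose the write before replication completes.
        with self.lock:
            self.primary[key] = data
            self.staging[key] = data
        # 2. Ship the block to the peer in the background...
        threading.Thread(target=self._replicate, args=(key, data)).start()
        # 3. ...while acknowledging the application right away.
        return "ACK"

    def _replicate(self, key, data):
        # 4. Once the peer confirms, the second local copy is redundant
        #    and can be marked erasable / reclaimed.
        if self.peer.replicate(key, data):
            with self.lock:
                self.staging.pop(key, None)

if __name__ == "__main__":
    node = ToyServerSanNode(ToyPeer())
    print(node.write("block-42", b"payload"))  # ACK returns before peer confirms
```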
Also: the costs of server SANs are dropping dramatically. Look at Scale Computing or Maxta. The downward pressure has begun in earnest. What's more, as they drive down their CPU and memory requirements, the toll on your virtual infrastructure shrinks. To the point that I seriously doubt you'll get the same amount of storage, the same IOPS and the same latencies from centralized storage vendors for the money. And I can pretty much guarantee you won't 5 years from now, as server SANs commoditize storage for good.
Also also: server SANs are starting to address the issue of CPU usage for storage. A great example is SimpliVity's FPGA for inline deduplication and compression. It works, and it works well.
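(For the curious, the "only changed blocks move between nodes" point boils down to content-hash dedup plus compression before commit. A rough sketch of the general technique - not SimpliVity's FPGA implementation, and every name here is hypothetical:)

```python
# Rough sketch of inline dedup + compression: fingerprint each block before
# it is committed, keep/ship only blocks we haven't seen, compress what we keep.
# Illustrates the general technique, not any specific vendor's engine.

import hashlib
import zlib

class InlineDedupStore:
    def __init__(self):
        self.blocks = {}     # fingerprint -> compressed block
        self.refcount = {}   # fingerprint -> number of references

    def ingest(self, block: bytes) -> str:
        fp = hashlib.sha256(block).hexdigest()
        if fp in self.blocks:
            # Duplicate: just bump the reference count.
            # Nothing new needs to cross the wire to the peer node.
            self.refcount[fp] += 1
        else:
            # New data: compress, commit, and (in a real system)
            # queue this one compressed block for replication.
            self.blocks[fp] = zlib.compress(block)
            self.refcount[fp] = 1
        return fp

    def read(self, fp: str) -> bytes:
        return zlib.decompress(self.blocks[fp])

if __name__ == "__main__":
    store = InlineDedupStore()
    a = store.ingest(b"x" * 4096)
    b = store.ingest(b"x" * 4096)      # dedups against the first write
    print(a == b, len(store.blocks))   # True 1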
Additionally, this statement: "Anything the server SAN guys say to the contrary is from their "testing" which ignores data consistency issues completely in favour of better stats. EMC, NetApp, HP, HDS never ignore data consistency for their tier 1 systems even in testing, hence the apparent difference to the layman." is pure FUD. Not only is it FUD, it's insulting FUD. I absolutely agree that one of the server SAN vendors - and a prominent one - has this problem. The rest emphatically do not.
More to the point, having devoted two years of my life to learning every facet of these systems, I do not appreciate being called "a layman". I promise you, I know more about server SANs than you do...and based on your level of interest and usage of FUD, probably more than you will in the next five years.
The thing about server SANs is that they are not "one size fits all". They can be configured for different requirements; different balances can be struck, and tradeoffs consciously made.
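(To make that concrete, here are two entirely made-up tuning profiles showing the kind of knobs I mean. Neither is any vendor's actual default; the names are mine.)

```python
# Hypothetical tuning profiles, just to show the kind of tradeoffs
# a server SAN lets you make per workload or per cluster.

profiles = {
    "capacity_first": {
        "replicas": 3,               # more copies, more usable-capacity cost
        "write_mode": "sync",        # ack only after peers confirm
        "inline_dedup": True,
        "inline_compression": True,
        "ram_write_cache": False,    # no volatile write buffering
    },
    "latency_first": {
        "replicas": 2,
        "write_mode": "local_then_async",  # ack on local commit, replicate in background
        "inline_dedup": False,             # skip the CPU cost on the hot path
        "inline_compression": False,
        "ram_write_cache": True,           # faster, but an accepted risk
    },
}

for name, knobs in profiles.items():
    print(name, knobs)
```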
Also: "As for using volatile memory for storage, the same is true - yes it's quicker, but only in the same way as strapping solid fuel rockets to your car. Survival rates are considerably lower in exchange for a faster ride."
This is a rare configuration, at least for writes. (Though there is one vendor in particular I know of that advocates this and insists on calling themselves a "server SAN" when they're nothing of the sort...)
I do see it in server SAN configurations tweaked for VDI - ones where the node in question won't be storing the golden master or differencing disks, and the goal is to cram in every last VM possible. I don't agree with it, but I do know the vendors that do it, and they are very up front about the risks.
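(For anyone wondering where the actual risk sits in those RAM-heavy configs, it comes down to when a write reaches stable media. A toy comparison with hypothetical names, not anyone's real code path:)

```python
# Toy comparison of a durable write path vs. a volatile (RAM) write-back
# path, to show where the risk in the "solid fuel rockets" analogy lives.

import os
import tempfile

def durable_write(path, data):
    # Ack only after the data is on stable media (fsync).
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    return "ACK (survives power loss)"

ram_buffer = []

def volatile_write(data):
    # Ack as soon as the data sits in RAM. Much faster, but a node crash
    # or power loss before destage loses every write still in the buffer.
    ram_buffer.append(data)
    return "ACK (lost if the node dies before destage)"

if __name__ == "__main__":
    path = os.path.join(tempfile.gettempdir(), "toy_block.bin")
    print(durable_write(path, b"data"))
    print(volatile_write(b"data"))
```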
Long story short: you're working from a whole lot of FUD. If there is one valid concern in the whole lot, it is that no single server SAN vendor has yet addressed all of these issues in a single off-the-shelf product offering. (The major stumbling block being that most of them choose to stick with Ethernet for simplicity's sake...but that's changing; I've seen deployments using InfiniBand from most vendors, and several are looking into PCI-E interconnects for 2015.)
That said, I happen to know of at least four models in development from different vendors that will address everything you raised (and a few other issues) in 2015.
Centralized storage - especially centralized storage costing $virgins from the majors - is simply not a requirement. There are far cheaper alternatives available today, and they are selling like hotcakes. I highly recommend you put down the vendor "war cards" and take some of the high-end server SAN offerings for a spin. You'll be pleasantly surprised.