"if you want to share Violin arrays across a few servers then you have to introduce a gateway (x86) server which introduces more latency," .... "3PAR can support 200 SSDs without that issue."
If you need 50-100 TB of crazy-high performance, and you need it across many servers, you might need to go to a traditional SAN-based array. IBM's RamSan 820 can support 20 TB of capacity after RAID, which isn't much if you are talking about storing boatloads of unstructured files, but is plenty for most of the world's DBs/DWs, which is where most people actually want the performance. Violin is probably similar, but I am not as familiar with their products.

3PAR, or any other traditional SAN array with SSDs, solves a problem which doesn't exist for the vast majority: how to get extreme performance, relative to disk, across a large amount of capacity in a SAN array. Most people only need something in the range of 500,000 IOPS at 10ms of latency for a single DB/DW, or a few of them, usually running off one or a handful of servers. If we get to the point where SSD is the same price as HDD, then I am sure people will take the extra performance at the same or comparable cost and have their random files and other data served at blistering speeds, but outside of a few critical workloads it is primarily about cost per TB, not performance.
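As a rough illustration of that capacity-versus-performance point, here is a back-of-the-envelope sketch. Only the 500,000 IOPS and ~20 TB figures come from the discussion above; the per-drive IOPS and usable-capacity numbers are assumptions picked for illustration, not vendor specs.

```python
import math

# Back-of-the-envelope sketch: is an all-SSD SAN array sized by performance or by capacity?
# Only the first two figures come from the discussion above; the per-drive numbers are assumptions.

target_iops = 500_000        # the "most people only need" performance figure quoted above
target_capacity_gb = 20_000  # ~20 TB usable, RamSan 820 class

iops_per_ssd = 25_000        # assumed sustained random IOPS per SSD (illustrative)
usable_gb_per_ssd = 400      # assumed usable capacity per SSD after RAID (illustrative)

ssds_for_iops = math.ceil(target_iops / iops_per_ssd)
ssds_for_capacity = math.ceil(target_capacity_gb / usable_gb_per_ssd)

print(f"Drives needed to reach {target_iops:,} IOPS: {ssds_for_iops}")
print(f"Drives needed to reach {target_capacity_gb / 1000:.0f} TB usable: {ssds_for_capacity}")

# If the capacity count dominates, the array is being sized by TB rather than by IOPS,
# which is the "it is primarily about cost per TB, not performance" argument.
print("Sizing driver:", "capacity" if ssds_for_capacity > ssds_for_iops else "performance")
```

With these assumed numbers the drive count is set by capacity long before IOPS becomes the constraint, which is exactly why a 200-SSD array is solving a problem most buyers don't have.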
I think internal storage is where performance-critical workload data, and maybe all data, will be stored in the future: partly for the obvious reason that it is the best place to put data with high performance requirements (right next to the CPU), but also because, as you mention, people will no longer have a way of managing data that is distributed across many servers... which sounds like a problem, but it is one the application providers are only too happy to help people solve with additional software products of their own, rather than from a storage provider. SAP, Oracle, VMware and MS are already selling products which look curiously like functions that used to be managed by SAN arrays.