Game Changer?
Whilst I would certainly agree that the current generation of storage arrays simply isn't up to getting the best out of the capabilities of an SSD, as the internals and interconnects are not powerful enough, I think there's good reason why storage arrays will not disappear and why a revolution in datacentre interconnect is not going to happen any time soon.
The issue is around shared storage (rather than dedicated storage, which can be attached directly to server I/O buses). Sharing storage across multiple servers requires some form of network interconnect; it doesn't matter whether that sharing is peer-to-peer or via a central array, something has to connect the storage services to the client systems. That network has to work over moderately long distances, perhaps 100 metres or more; it has to be highly resilient, capable of dynamic reconfiguration without disrupting service, and able to handle switching and routing of requests; and, most importantly, it must be widely supported - indeed, at some levels it has to be virtually ubiquitous. It also has to be very highly scalable.
The fact is that there aren't many technologies to choose from for a high-performance shared network. You can forget wireless; it's far too slow and unpredictable. There are a few short-distance interconnect technologies, such as FireWire, USB, SATA and so on, that are moderately fast, but really not significantly more so than the main data centre interconnects, which are, of course, Ethernet, Fibre Channel, FICON and the converged Ethernet/FC standards (with a nod to InfiniBand). There have been a few proprietary cluster interconnect technologies, but these all have very limited support and nothing like open standards.
With the protocols above, we have real networks in the 10Gbps region available, with movement towards 100Gbps. In truth, there aren't many things out there which can properly support data rates like the latter. 100Gbps may only be the combined throughput of a dozen enterprise SSDs, but there's precious little out there that could deal with that amount of data so quickly from a single data source (aggregate bandwidth is another issue entirely).
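For a rough sense of scale, here is the arithmetic behind that "dozen SSDs" figure as a minimal sketch; the ~1GB/s of sequential throughput per enterprise SSD is my assumption, not a figure from the text:

```python
# Back-of-envelope check of the "dozen SSDs" claim. The per-drive figure is an
# assumed ballpark for a fast enterprise SSD, not a measured number.
GBPS_LINK = 100                  # link speed in gigabits per second
LINK_GB_PER_S = GBPS_LINK / 8    # ~12.5 gigabytes per second of raw bandwidth

SSD_GB_PER_S = 1.0               # assumed sequential throughput of one enterprise SSD
ssds_to_saturate = LINK_GB_PER_S / SSD_GB_PER_S

print(f"100Gbps is roughly {LINK_GB_PER_S:.1f} GB/s, "
      f"or about {ssds_to_saturate:.0f} enterprise SSDs running flat out.")
```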
I would argue that for most mainstream data centre applications, if I/O latency dropped from the typical 6-8ms you might see for a random read on an array of enterprise disks to the roughly 0.5ms you might expect with SSDs behind a well-designed modern I/O network and server technology, then that order-of-magnitude improvement in latency would simply shift the bottleneck somewhere else - quite probably to processing, lock contention or any number of other places.
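To illustrate why the bottleneck moves rather than vanishes, here is a toy Amdahl's-law-style calculation; the 2ms of per-transaction processing and lock time is an invented figure purely for illustration, and only the 7ms and 0.5ms latencies come from the paragraph above:

```python
# Toy model: each transaction spends time waiting on storage plus time in
# processing/lock contention. The non-I/O figure is invented for illustration.
io_disk_ms = 7.0   # mid-point of the 6-8 ms random read on enterprise disk
io_ssd_ms = 0.5    # expected latency with SSDs and a modern I/O network
other_ms = 2.0     # assumed processing + lock contention per transaction

before = io_disk_ms + other_ms
after = io_ssd_ms + other_ms

print(f"Disk era: {before:.1f} ms/transaction, {io_disk_ms / before:.0%} spent on I/O")
print(f"SSD era:  {after:.1f} ms/transaction, {io_ssd_ms / after:.0%} spent on I/O")
print(f"End-to-end speed-up is only {before / after:.1f}x, "
      f"even though I/O latency fell {io_disk_ms / io_ssd_ms:.0f}x.")
```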
Now there is no doubt that storage array suppliers are going to have to do something about the total throughput of their arrays and the way they handle storage so that they aren't the bottleneck (even top-end arrays can struggle once they get into I/O rates of several hundred thousand IOPS, or data rates in the 10 gigabytes per second region). However, I think for most data centre apps an increase in capability of an order of magnitude would be ample, and we aren't in territory where there would be much benefit in increases of 100 times that.
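As a rough illustration of how quickly a modest number of SSDs reaches those array-level ceilings, here is a small sketch; the per-drive figures of ~50,000 random-read IOPS and ~1GB/s are assumptions on my part, not numbers from the text:

```python
# Rough sketch of how few SSDs it takes to hit the array-level ceilings quoted
# above. Per-drive figures are assumed ballparks, not measurements.
SSD_IOPS = 50_000        # assumed random-read IOPS per enterprise SSD
SSD_GB_PER_S = 1.0       # assumed sequential throughput per enterprise SSD

ARRAY_IOPS_CEILING = 500_000   # "several hundred thousand" IOPS
ARRAY_GB_PER_S_CEILING = 10.0  # ~10 GB/s data rate

print(f"IOPS ceiling reached with ~{ARRAY_IOPS_CEILING / SSD_IOPS:.0f} SSDs")
print(f"Throughput ceiling reached with ~{ARRAY_GB_PER_S_CEILING / SSD_GB_PER_S:.0f} SSDs")
```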
Of course there will always be hyper-scale configs, like Google or (maybe) clouds, but those are also exercises in software engineering, and very few organisations have the need or resources for that.
I for one would be very surprised if, in 10 years' time, the data centre storage interconnect didn't still rely on evolved versions of Ethernet and Fibre Channel connecting to centralised storage facilities which may be implemented a bit differently, but are still recognisable as arrays. Datacentres evolve; they don't suddenly move to a new generation. The old has to work with the new. Arrays of some sort are here to stay.