I think you're half right.
For reasons I can't fully disclose, I believe the underlying costs of commodity "built to fail" components combined with the triple-mirroring techniques typically found in hyperscale cloud providers' offerings could be improved upon significantly, while still providing good margins for providers of more efficient technology.
There are also some interesting economic / consumption models indicating that on-premise technology is, and will sustainably remain, cheaper than using cloud providers, and that the scale at which this happens is much smaller than most people would assume (I've seen figures putting that point at an annual storage capacity spend of as little as $250K). This depends, of course, on internal IT adopting the same state-of-the-art automation techniques that the hyperscale cloud vendors use (a reasonably big "it depends", I might add). But it points to a mixed model for IT becoming a somewhat permanent feature of the datacenter landscape, in much the same way that there is a mixture of permanent/contract/outsourced personnel, with the exact mix changing depending on current circumstances.
If the $250K capacity spend figure is accurate, then there will still be smaller-scale "private storage" requirements where "non-BigCloud" storage vendors can add value. Indeed, I suspect there will be a number of "small cloud" vendors out there, especially as SaaS vendors grow large enough to justify building their own infrastructure, as Zynga did. Those vendors will probably choose to innovate in areas outside of infrastructure, and will probably rely on technology developed by the existing storage vendors (or at least some of them).
Sure, some applications that are completely homogeneous, like email, will end up almost entirely with the "BigCloud" hyperscale vendors, but after you cherry-pick those out of the datacenter, there are literally thousands of custom applications left that IT managers still need to support, and it's going to take a long time for those to get re-written/ported to a cloud platform. As a case in point, there are plenty of applications out there that are not running on virtualised servers today, and probably never will be.
Eventually those applications will all get re-written, and when they do it is likely they will initially be serviced on BigCloud IaaS platforms. But eventually many of them will probably migrate to "Small Cloud" IaaS vendors who can provide more finely tuned SLAs, allowing those SaaS vendors to differentiate themselves from the copycats who are busy trying to disrupt them using exactly the same underlying infrastructure.
Lastly, demand for storage capacity is not limited by our ability to generate data, but rather by the cost and convenience of storing it. Build a sufficiently cheap and convenient way of storing data and it will get filled. AWS proved that nicely, even though the amount they charge (based on 11c/GiB/month, i.e. $4K+/TiB over 3 years) is currently higher than the equivalent cost of entry-level, low-SLA storage from the likes of EMC, NetApp etc. The genius of AWS was allowing people to buy it by the GiB per month on demand, making it good enough, and making it blindingly easy to consume.
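For anyone who wants to check the per-TiB figure above, the arithmetic is just unit conversion on the quoted 11c/GiB/month rate; a minimal sketch:

```python
# Back-of-envelope check: 11c/GiB/month over 3 years, per TiB.
price_per_gib_month = 0.11   # USD, the rate quoted above
gib_per_tib = 1024           # binary TiB
months = 36                  # 3 years

cost_per_tib_3yr = price_per_gib_month * gib_per_tib * months
print(f"${cost_per_tib_3yr:,.2f} per TiB over 3 years")  # → $4,055.04
```

Which lands a little over $4K/TiB, matching the "$4K+" figure.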
Cloud isn't nearly so much about total price as it is about convenience, lower-risk consumption models, and outsourcing the management of something people would rather not manage if they could get away with not having to. If you solve the provisioning/management problems and alter your business models a little (surprisingly little), even current technology from the major storage vendors can be as compelling, if not more so, than the cloud vendors.
One thing is certain though: the next 5-10 years will look NOTHING like the last ... but we've all known that for quite some time now ... and everyone in the storage industry will be living in "interesting times". Don't write the existing vendors off just yet; the party isn't over by a long shot, and the fat lady hasn't even decided which aria she's going to sing.
Regards
John Martin