So they've discovered Google FS? I thought that was an off-the-shelf thing at this point. The only real difference is that a proper P2P network would have a distributed "name node" structure (think Nutanix here) instead of the single point of failure so common in earlier implementations of things like Hadoop.
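For the curious, here's a rough sketch of what a "distributed name node" might look like: instead of one metadata server, you spread path-to-metadata ownership across a consistent-hash ring, so losing any single node doesn't take out the whole namespace. The class and node names here are made up for illustration, not taken from any real P2P filesystem.

```python
# Hypothetical sketch: metadata ownership via consistent hashing,
# so no single "name node" is a single point of failure.
# All names (MetadataRing, nodeA, ...) are illustrative.
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class MetadataRing:
    """Maps file paths to the metadata nodes responsible for them."""

    def __init__(self, nodes, vnodes=3):
        # Each node gets several virtual points on the ring for balance.
        self.ring = sorted((_hash(f"{n}#{i}"), n)
                           for n in nodes
                           for i in range(vnodes))
        self.keys = [h for h, _ in self.ring]

    def owners(self, path, count=2):
        """Return `count` distinct nodes holding metadata for `path`."""
        idx = bisect.bisect(self.keys, _hash(path))
        owners, seen = [], set()
        for step in range(len(self.ring)):
            _, node = self.ring[(idx + step) % len(self.ring)]
            if node not in seen:
                seen.add(node)
                owners.append(node)
            if len(owners) == count:
                break
        return owners

ring = MetadataRing(["nodeA", "nodeB", "nodeC", "nodeD"])
print(ring.owners("/videos/cat.mkv"))
```

The nice property is that when a node drops off the ring, only the paths it owned get reassigned (to their next replica), rather than the whole cluster going dark the way a classic single-NameNode Hadoop deployment would.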
We're just talking about turning an ISP's last-mile network into a gigantic Hadoop cluster, which then connects via a fat link to some other gigantic Hadoop cluster on some other ISP's last mile. (Well, not actually Hadoop, but you get the idea.)
That's (maybe) okay if you are talking about a "mostly isolated network" like xDSL, but this would play merry hob with DOCSIS-based (cable) modems and infrastructure. Google Fiber as the base? Maybe. But when you're at a Google Fiber level of "to the premises," are you really putting much compute/storage/etc. in the individual house? If you were sitting on a pipe the size of the Mississippi, then you'd be a perfect candidate for "as a service" streaming and storage of all your data. Once you've got that kind of bandwidth, for $deity's sake, toss your non-unique data into the cloud and let someone else deal with the headache of managing and maintaining it.
I am not against fundamental research, but this does seem as though it won't be a "peer to peer" network in the traditional sense. Interesting theoretical take on a CDN, though.