Nope, it's idiot companies that didn't bother to check how the web worked
As previous commenters have already mentioned, robots.txt and other methods of preventing search engines from indexing web content existed long before Google.
The web was invented as an OPEN medium. It was never intended to be commercial; it was built for the free and open exchange of academic information, without barriers, and it has always worked that way unless you put up barriers of your own. That isn't hard to do in this case: well-established mechanisms for it have been there almost from the start.
What you are saying is the equivalent of the person who wanted to cover the world with leather rather than wear shoes. The web doesn't work the way corporate lawyers with mid-last-century mindsets think it should. Tough. They can no more change that than they can change the colour of the sky.
Nobody made these companies put their content on the web, but having chosen to do so, if they didn't want their content indexed, they should have used the standard methods of accomplishing that: robots.txt, restricted access and so on. You can't say no-one should look at your content on an open medium without adding your own restrictions, any more than you can demand the sky be green because that's how you think it ought to be.
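For anyone who hasn't seen one: a complete, keep-everyone-out robots.txt is two lines of plain text served from the site root (e.g. example.com/robots.txt, a placeholder domain). This is standard Robots Exclusion Protocol syntax, honoured by Google and every other well-behaved crawler:

    User-agent: *
    Disallow: /

Swap the wildcard for a specific crawler name (e.g. User-agent: Googlebot) if you only want to keep one search engine out while letting the others in.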
But instead these companies want Google and other search engines to publicise their work for them and then, instead of paying the search engines for that service, have the search engines pay them. They know search engines cache content, yet they still want their stuff indexed, and then bitch because it's cached.
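Caching, incidentally, can be declined just as easily while still allowing indexing. The major engines honour a per-page robots meta tag; this one tells them to index the page but not to serve a cached copy:

    <meta name="robots" content="noarchive">

(The equivalent X-Robots-Tag: noarchive HTTP header covers non-HTML content.) So even the specific complaint about caching has an off switch that these companies chose not to flip.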
I'm not particularly a fan of Google, and I do think it has a cavalier attitude to copyright when it comes to digitising books and trying to snaffle the rights to "orphan works" in particular. But in this case the companies involved just want to have their cake and eat it. Google has stolen nothing; in fact it's been doing them a favour, but they are too greedy to face that fact.