So...
Can anyone tell when they are in the 1%?
Google is now using its HTTP-boosting SPDY protocol to accelerate almost all SSL web traffic between Chrome browsers and its many web services, and according to Mike Belshe, an engineer on Google's Chrome team, the protocol is juicing performance by more than 15 per cent on average. "We do not have any property that is not …
The described feature has nothing to do with pipelining. So Opera doesn't "do this" and it doesn't "work for all sites". Servers have to support pipelining too, for a start.
Pipelining lets you send multiple GET requests back-to-back on the same TCP connection without waiting for each response. How fast you send GET requests isn't typically the bottleneck in downloading a website's content. Most large-scale sites distribute images, scripts etc. across multiple domains or subdomains, typically hosted on different server farms. Those requests can be sent independently of each other anyway, since they're requests to different domains. So it's debatable how much pipelining "speeds up" loading those sites.
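For illustration, a minimal sketch of what pipelining looks like on the wire (the host and paths are made up, and it assumes the server actually honours pipelined requests):

```python
import socket

# Hypothetical host and paths, purely to illustrate pipelining: two GET
# requests are written back-to-back on one TCP connection before any
# response has been read.
HOST = "www.example.com"

requests = (
    "GET /style.css HTTP/1.1\r\nHost: {h}\r\n\r\n"
    "GET /script.js HTTP/1.1\r\nHost: {h}\r\nConnection: close\r\n\r\n"
).format(h=HOST)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(requests.encode("ascii"))
    # Responses must come back in request order, so a slow first response
    # delays everything queued behind it (head-of-line blocking).
    reply = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        reply += chunk

print(reply[:200])
```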
You don't seem to get what the article is describing .... it's not pipelining, that's for sure...
*Protocols* used in the WWW should NOT just be "open-source and free".
They need to be interoperable and so have a published standard. That's what the IETF and RFCs are for. The whole Internet was built on this principle, otherwise we'd have a mishmash of proprietary protocols with a really crappy common ground.
If Microsoft did this a few years back to push the performance of Internet Explorer, their search engine, maps and Hotmail, the Internet community would be up in arms. Why should Google be treated any different?
If Microsoft had done this a couple of years ago to push the performance of Internet Explorer and Hotmail, they'd have done so in a way that made it purposefully difficult for others to implement.
Google, on the other hand, have made a reasonable effort to make it easy for others to re-use the work they've done on SPDY, and if you don't want to use SPDY, the SPDY-enabled end you're communicating with can just fall back to regular HTTP.
I think that having published specs, a whitepaper and sample implementations (in the form of Chromium and mod_spdy) earns them the right to be treated differently from Microsoft.
Also, I'm not sure whether Google have a patent on this protocol, but it seems unlikely given Google's track record regarding patents, and that, if nothing else, should earn them the right to be treated differently from Microsoft.
Google has published a full spec of the protocol, which means others are capable of implementing it fully and compatibly. If you choose not to use SPDY, things will still work, albeit not as quickly.
The MS way is to extend a standard in a way that is not completely documented, then fail interoperation if the extensions are not used.
Note that it is still under development. I would hope/expect to see an RFC come out of this at some point.
If you read the spec you'll see it's far from full. There are too many holes in it, and many parts still just say TODO.
It may be here one day, but it's not now. They seem to move very quickly at implementing it in their products but really slowly when it comes to putting it into a document.
And while this goes on they are already using it unfairly to push their own products.
SPDY frequently asked questions
Q: Doesn't HTTP pipelining already solve the latency problem?
A: No. While pipelining does allow for multiple requests to be sent in parallel over a single TCP stream, it is still but a single stream. Any delays in the processing of anything in the stream (either a long request at the head-of-line or packet loss) will delay the entire stream. Pipelining has proven difficult to deploy, and because of this remains disabled by default in all of the major browsers.
Source: http://dev.chromium.org/spdy/spdy-whitepaper
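To make the FAQ's point concrete, here's a toy sketch (invented framing, nothing like the real SPDY wire format) of how tagging chunks with a stream id lets a slow response interleave with fast ones instead of blocking them:

```python
# Toy illustration only: each chunk carries a stream id, so the receiver can
# reassemble streams independently. With pipelining, by contrast, responses
# must arrive strictly in request order.
responses = {
    1: [b"<html>...", b"...</html>"],   # slow, large response
    2: [b"body { color: red }"],        # small stylesheet
    3: [b"console.log('hi')"],          # small script
}

def frames(streams):
    """Yield (stream_id, chunk) pairs round-robin, i.e. multiplexed."""
    pending = {sid: list(chunks) for sid, chunks in streams.items()}
    while pending:
        for sid in list(pending):
            yield sid, pending[sid].pop(0)
            if not pending[sid]:
                del pending[sid]

for stream_id, chunk in frames(responses):
    print(stream_id, chunk)
# Streams 2 and 3 finish on the first pass even though stream 1 is still sending.
```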
"Because we can identify issues at the application layer, it made sense to try to address those first, develop an application protocol that works really well, that people agree works really well, and then start to dig down"
Dig down? Next down is the Transport Layer, where TCP lives.
Looking forward to Google implementing their own poorly documented and non-standard TCP replacement to make their own browser, services AND operating systems appear faster at pushing ads than others.
But hey it's all free, right?
I'm not sure if you were being serious, but even Google would have a hard time changing anything at IP level unless they planned to run their own ISP.
Transport layer is however - as Google themselves now hinted - a viable target.
There's a mention of SCTP in another comment, but from this discussion on the SPDY list:
http://groups.google.com/group/spdy-dev/browse_thread/thread/785677f6bf984da3/0dd575194b0b9556
Google doesn't seem to think that SCTP gives enough of an advantage over SPDY+TCP. I can almost feel the gears in their heads coming up with something of their own.
After all, there were already solutions similar to SPDY, like BEEP, but Google chose to go their own way instead of working to improve those.
They could also fix their server to return the correct file types:
1307121630: Resource interpreted as stylesheet but transferred with MIME type text/html.
1307121630: Resource interpreted as script but transferred with MIME type text/html.
search.json: Resource interpreted as script but transferred with MIME type application/json.
Seems the "experts" are just the ones drinking the cocktails.
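For what it's worth, getting the Content-Type right is usually a one-line change in whatever server is involved. A minimal sketch (a hypothetical handler using Python's standard library, obviously not Google's actual stack):

```python
import http.server

class TypedHandler(http.server.SimpleHTTPRequestHandler):
    # Map extensions to the correct Content-Type instead of letting
    # everything fall through as text/html or octet-stream.
    extensions_map = {
        **http.server.SimpleHTTPRequestHandler.extensions_map,
        ".css": "text/css",
        ".js": "application/javascript",
        ".json": "application/json",
    }

if __name__ == "__main__":
    http.server.HTTPServer(("", 8000), TypedHandler).serve_forever()
```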
Time stamp the files/frames in the local cache using a hierarchical approach. Then, starting at the top, compare the time stamps. This could even be rolled out to nodes along the network to act as a whole-web accelerator for the most popular webpages. Implemented correctly, this technique would also eliminate stale pages (because the time stamps would always be compared without assumptions).
The present architecture loads and reloads and reloads again and again and again. Tedious.
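A rough sketch of what that might look like (the Node/stale_nodes names are hypothetical; HTTP already does something in this spirit with If-Modified-Since and ETag validation, just not hierarchically):

```python
# Hypothetical cache node: each node keeps its own timestamp plus the newest
# timestamp anywhere beneath it, so one comparison at the top can rule out
# re-fetching an entire subtree.
class Node:
    def __init__(self, name, stamp, children=()):
        self.name = name
        self.stamp = stamp
        self.children = list(children)
        self.newest = max([stamp] + [c.newest for c in self.children])

def stale_nodes(local, remote):
    """Walk both trees top-down, descending only where the remote side is newer."""
    if remote.newest <= local.newest:
        return []                      # whole subtree unchanged, nothing to re-fetch
    out = [remote.name] if remote.stamp > local.stamp else []
    for lc, rc in zip(local.children, remote.children):
        out.extend(stale_nodes(lc, rc))
    return out
```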
How does that work for HTTPS, or did you miss the "to accelerate almost all SSL web traffic" bit of the article?
Also I'll take slower loading web pages instead of anything that breaks the end to end principle of the Internet even more than NAT.
I can see that idea failing miserably either by technical fault or human factors.