Google sees 15% speed boost with HTTP tweak

Google is now using its HTTP-boosting SPDY protocol to accelerate almost all SSL web traffic between Chrome browsers and its many web services, and according to Mike Belshe, an engineer on Google's Chrome team, the protocol is juicing performance by more than 15 per cent on average. "We do not have any property that is not …


This topic is closed for new posts.
  1. John Robson Silver badge


    Can anyone tell when they are in the 1%?

    1. LaeMing

      Possibly isn't any 'one'

      but 1% of transactions spread across everyone??

  2. Paratrooping Parrot


    But what is the security like? I hope they haven't made any compromises on it.

  3. SteveBalmer

    Meh, Opera Pipelining does this and works for all sites.

    Opera - Still the ONLY browser to run HTTP Pipelining enabled by default on all sites... And you don't need silly server-side code for it to work...

    1. Anonymous Coward


      The described feature has nothing to do with pipelining. So Opera doesn't "do this" and doesn't "work for all sites". Servers have to support pipelining too, for a start.

      Pipelining lets you send multiple GET requests over the same TCP connection without waiting for each response. How fast you send GET requests is not typically the bottleneck in downloading a website's content. Most large-scale sites distribute images, scripts etc. across multiple domains or subdomains, typically hosted on different server farms. Those requests can be sent independently of each other anyway, since they're requests to different domains, so it's debatable how much pipelining "speeds up" loading those sites.

      You don't seem to get what the article is describing .... it's not pipelining, that's for sure...
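To make the comment above concrete, here is a minimal sketch of what HTTP/1.1 pipelining means on the wire: several GET requests written back-to-back into one buffer for a single TCP connection, before any response has arrived. The host and paths are invented for illustration.

```python
# Sketch of HTTP/1.1 pipelining: multiple GET requests are concatenated
# and sent on one TCP connection before any response comes back.
# Host name and paths below are made up.

def build_pipelined_requests(host, paths):
    """Concatenate several GET requests into one byte buffer,
    as a pipelining client would write them to a single socket."""
    requests = []
    for path in paths:
        requests.append(
            "GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: keep-alive\r\n"
            "\r\n".format(path, host)
        )
    return "".join(requests).encode("ascii")

buf = build_pipelined_requests("example.com", ["/", "/style.css", "/app.js"])
print(buf.count(b"GET "))  # three requests in one buffer
```

Note that the responses still come back strictly in order on that one connection, which is exactly the head-of-line blocking problem discussed further down the thread.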

    2. Old Handle

      This sounds like something totally different.

      Still, I'm all in favor of any technology that speeds up the web with little or no downside.

  4. Anonymous Coward


    ...another giant using a non-approved standard to make its own software seem better.

    Google, the Nu Microsoft.

    1. Tom 15


      Except of course, it's open source and free... no one really objects to giants coming up with new technologies, it's their job, as long as they're backwards compatible and don't break stuff.

      1. Anonymous Coward
        Anonymous Coward

        @Tom 15

        *Protocols* used in the WWW should NOT just be "open-source and free".

        They need to be interoperable and so have a published standard. That's what the IETF and RFCs are for. The whole Internet was built on this principle; otherwise we'd have a mishmash of proprietary protocols with a really crappy common ground.

        If Microsoft had done this a few years back to push the performance of Internet Explorer, their search engine, maps and Hotmail, the Internet community would be up in arms. Why should Google be treated any differently?

        1. Arion

          Why Google should be treated differently to Microsoft

          If Microsoft had done this a couple of years ago to push the performance of Internet Explorer and Hotmail, they'd have done so in a way that made it purposefully difficult for others to implement.

          Google, on the other hand, have made a reasonable effort to make it easy for others to re-use the work they have done on SPDY, and if you don't want to use SPDY, the SPDY-enabled part you're communicating with can just fall back to regular HTTP.

          I think the fact that they've published specs, a whitepaper and sample implementations (in the form of Chromium and mod_spdy) earns them the right to be treated differently from Microsoft.

          Also, I'm not sure whether Google have a patent on this protocol, but even if they do, Google's track record regarding patents, if nothing else, should earn them the right to be treated differently to Microsoft.

    2. alain williams Silver badge

      SPDY - proprietary or not?

      Google has published a full spec of the protocol, which means that others are able to implement it fully and compatibly. If you choose not to use SPDY, things will still work, albeit not so quickly.

      The MS way is to extend a standard in a way that is not completely documented, then fail interoperation if the extensions are not used.

      Note that it is still under development. I would hope/expect to see an RFC come out of this at some point.

      1. Anonymous Coward
        Anonymous Coward

        No, Google hasn't published a full spec @alain williams

        If you read the spec you'll see it's far from full. There are too many holes in it, with many parts still saying TODO.

        It may be there one day, but it's not now. They seem to move very quickly at implementing it in their products but reallyyyy slow when it comes to putting it into a document.

        And while this goes on they are already using it unfairly to push their own products.

  5. TJ 1

    HTTP Pipelining != SPDY

    SPDY frequently asked questions

    Q: Doesn't HTTP pipelining already solve the latency problem?

    A: No. While pipelining does allow for multiple requests to be sent in parallel over a single TCP stream, it is still but a single stream. Any delays in the processing of anything in the stream (either a long request at the head-of-line or packet loss) will delay the entire stream. Pipelining has proven difficult to deploy, and because of this remains disabled by default in all of the major browsers.
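The FAQ's point can be sketched in a few lines: SPDY frames carry a stream ID, so responses can be interleaved on one TCP connection and a slow response doesn't hold up the others, unlike a pipelined HTTP/1.1 connection where everything queues behind the head of the line. The frame contents below are invented toy data, not real SPDY framing.

```python
# Toy illustration of multiplexing: each frame is tagged with a stream ID,
# so frames from different responses can interleave on one connection.
# (Real SPDY framing is binary and more involved; this is just the idea.)

def demultiplex(frames):
    """Group (stream_id, chunk) frames back into per-stream payloads."""
    streams = {}
    for stream_id, chunk in frames:
        streams.setdefault(stream_id, []).append(chunk)
    return {sid: b"".join(chunks) for sid, chunks in streams.items()}

# Stream 1 is a big, slow resource; streams 3 and 5 complete in between
# its chunks instead of queuing behind it as they would with pipelining.
frames = [(1, b"<html>"), (3, b"body{}"), (1, b"..."), (5, b"var x;"), (1, b"</html>")]
print(demultiplex(frames))
```

With pipelining, by contrast, the ordering of responses is fixed by the ordering of requests, which is why a single long request (or one lost packet) stalls the whole stream.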


    1. Anonymous Coward
      Anonymous Coward

      Quoting the competition

      nuff said

  6. Anonymous Coward
    Anonymous Coward

    Ahh the Google Wide Web (GWW)

    "Because we can identify issues at the application layer, it made sense to try to address those first, develop an application protocol that works really well, that people agree works really well, and then start to dig down"

    Dig down? Next down is the Transport Layer, where TCP lives.

    Looking forward to Google implementing their own poorly documented and non-standard TCP replacement to make their own browser, services AND operating systems appear faster at pushing ads than others.

    But hey it's all free, right?

    1. Michael 47

      You think that's bad?

      Below TCP is the IP layer. If the thought of Google having its own special transport protocol is bad, then the thought of having to get your own Google Address(tm) to use their services is truly horrifying.

      1. Anonymous Coward
        Anonymous Coward

        @Michael 47

        I'm not sure if you were being serious, but even Google would have a hard time changing anything at IP level unless they planned to run their own ISP.

        The transport layer is, however - as Google themselves have now hinted - a viable target.

        There's a mention of SCTP in another comment, but from this discussion on the SPDY list:

        Google doesn't seem to think that SCTP gives enough of an advantage over SPDY+TCP. I can almost feel the gears in their heads coming up with something of their own.

        After all, there were already solutions similar to SPDY, like BEEP, but Google chose to go their own way instead of working to improve those.

        1. Michael 47

          This is a title

          Sorry, no, wasn't being serious. Should have ended that </sarc> ^_^

  7. This post has been deleted by a moderator

  8. Anonymous Coward
    Anonymous Coward


    Can we stop bigging them up please?

    They claim to be "performance experts" but their site is absolutely atrocious on every benchmark I could think of to run. They don't even have gzip on, ffs, or combined, minified CSS/JS.
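For readers wondering how much the missing gzip actually costs: text assets like CSS and JavaScript are highly repetitive and compress very well, so serving them uncompressed wastes a lot of bandwidth. A quick check with Python's standard gzip module (the sample stylesheet text is invented):

```python
# Rough demonstration of why serving CSS/JS without gzip is wasteful:
# repetitive text compresses dramatically. Sample content is made up.

import gzip

css = b".header { color: #333; margin: 0 auto; }\n" * 200
compressed = gzip.compress(css)
print(len(css), "->", len(compressed))  # compressed copy is far smaller
```

Real-world CSS/JS is less repetitive than this toy input, but 60-90% savings on text assets are typical, which is why compression is usually the first item on any web performance checklist.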

    1. Anonymous Coward
      Anonymous Coward


      They could also fix their server to return the correct file types:

      1307121630: Resource interpreted as stylesheet but transferred with MIME type text/html.

      1307121630: Resource interpreted as script but transferred with MIME type text/html.

      search.json: Resource interpreted as script but transferred with MIME type application/json.

      Seems the "experts" are just the ones drinking the cocktails.
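The log lines quoted above are browser warnings that CSS, JavaScript and JSON were served with the wrong Content-Type. A server normally picks the right type from the file extension; Python's standard mimetypes module does exactly that mapping, as a quick illustration:

```python
# The complaint above: CSS/JS served as text/html. Servers map file
# extensions to Content-Type values; Python's mimetypes module holds
# the standard extension-to-type table.

import mimetypes

for name in ("site.css", "app.js", "search.json"):
    print(name, "->", mimetypes.guess_type(name)[0])
```

A stylesheet should go out as text/css and JSON as application/json; when everything arrives as text/html, strict browsers may refuse to apply the resource at all.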

  9. Hayden Clark Silver badge


    ... is it really worth it?

  10. Niall

    Dig down - maybe SCTP?

  11. JeffyPooh

    Puh. Time stamp the files in the local cache.

    Time stamp the files/frames in the local cache using a hierarchical approach. Then, starting at the top, compare the time stamps. This can even be rolled out to the nodes along the network to act as a whole-web accelerator for all the most popular webpages. Implemented correctly, this technique would also eliminate stale pages (because the time stamps would always be compared without assumptions).

    The present architecture loads and reloads and reloads again and again and again. Tedious.
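It's worth noting HTTP already has machinery close to this idea: conditional requests. A client sends its cached copy's timestamp in an If-Modified-Since header, and the server replies 304 Not Modified with no body if nothing changed. A minimal sketch of the server-side comparison (the timestamps are invented):

```python
# Sketch of an HTTP conditional GET: the server compares the resource's
# modification time against the client's If-Modified-Since header and
# returns 304 (no body) if the cached copy is still current.

from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def respond(resource_mtime, if_modified_since):
    """Return 304 if the client's cached copy is still current, else 200."""
    cached = parsedate_to_datetime(if_modified_since)
    return 304 if resource_mtime <= cached else 200

mtime = datetime(2011, 4, 1, tzinfo=timezone.utc)  # resource last changed in April
print(respond(mtime, "Fri, 03 Jun 2011 00:00:00 GMT"))  # 304: cache is fresh
```

What this doesn't solve is the revalidation round-trip itself, which is presumably what the "loads and reloads" complaint is really about.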

    1. Anonymous Coward
      Anonymous Coward

      Future internet architecture

      Check out - every router is a cache. Popular websites get much much faster. Anyone can publish popular content.

    2. Anonymous Coward
      Anonymous Coward


      How does that work for HTTPS or did you miss the "to accelerate almost all SSL web traffic " bit of the article?

      Also, I'll take slower-loading web pages over anything that breaks the end-to-end principle of the Internet even more than NAT does.

      I can see that idea failing miserably either by technical fault or human factors.

