HTTP 2.0 interop tests slated for August

Fresh from seeing its SPDY protocol adopted into the IETF's HTTP 2.0 draft, Google is pressing on with its quest for faster Interwebs with an experimental protocol called QUIC (Quick UDP Internet Connections). HTTP 2.0 (draft here) proposes a number of changes compared to HTTP 1.1 to cut latency for Web applications, without …


This topic is closed for new posts.
  1. Destroy All Monsters Silver badge

    Why the Internet clusterfucks

    "More on QUIC"

    Why didn’t you build a whole new protocol, rather than using UDP?

    Middle boxes on the Internet today will generally block traffic unless it is TCP or UDP traffic. Since we couldn’t significantly modify TCP, we had to use UDP. UDP is used today by many game systems, as well as VOIP and streaming video, so its use seems plausible.

    Why does QUIC always require encryption of the entire channel?

    As we learned with SPDY and other protocols, if we don’t encrypt the traffic, then middle boxes are guaranteed to (wittingly, or unwittingly) corrupt the transmissions when they try to “helpfully” filter or “improve” the traffic.
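    The FAQ's point about UDP already being widely passed by middleboxes can be illustrated with a minimal sketch (not QUIC itself, just a plain UDP datagram exchange over localhost, the kind of traffic games and VoIP already generate):

```python
import socket

# Minimal illustration (not QUIC): a single UDP datagram exchanged over
# localhost. Middleboxes generally already pass this kind of traffic.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # OS picks a free port
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello over UDP", addr)

data, peer = server.recvfrom(2048)
print(data.decode())                   # hello over UDP

client.close()
server.close()
```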

    1. Kevin McMurtrie Silver badge

      Re: Why the Internet clusterfucks

      Another month, another research paper promising to save us from TCP. How refreshing it would be to see a research paper instead showing that new tuning algorithms don't work outside a carefully controlled test.

      Here are your input parameters: ack packets, data packets. Here are your output parameters: data packet rate, data packet size, ack packets. That's all it comes down to. There's absolutely no way for those input parameters to indicate which output parameter is the bottleneck. You can set up a slow guess-and-check feedback loop, but the required solution changes too quickly for that. Unless your OS is many years old, TCP already knows all the tricks that work.

      UDP is also no cure for handshake latency or session setup. Having no handshake at all would let attackers request that extremely large payloads be delivered to forged addresses, so QUIC requires an initial handshake and session, just like a long-running TCP connection, for security.
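      The "guess-and-check feedback loop" described above is, roughly, classic AIMD congestion control: the only inputs are acks and losses, the only output is the window. A toy sketch (constants are illustrative, not from any real stack):

```python
# Toy AIMD (additive-increase, multiplicative-decrease) sketch of the
# feedback loop the comment describes. Inputs: 'ack' / 'loss' events.
# Output: the congestion window over time. Constants are illustrative.
def aimd(events, cwnd=1.0, incr=1.0, decr=0.5):
    """events: iterable of 'ack' or 'loss'; returns the window trace."""
    trace = [cwnd]
    for ev in events:
        if ev == "ack":
            cwnd += incr                   # additive increase: probe upward
        else:                              # 'loss'
            cwnd = max(1.0, cwnd * decr)   # multiplicative decrease: back off
        trace.append(cwnd)
    return trace

print(aimd(["ack", "ack", "ack", "loss", "ack"]))
# [1.0, 2.0, 3.0, 4.0, 2.0, 3.0]
```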

      Honestly, the best way to speed up web pages is to block advertisers and web bugs. Google is the problem, not the solution.

      1. Khaptain

        Re: Why the Internet clusterfucks

        <quote>Honestly, the best way to speed up web pages is to block advertisers and web bugs. Google is the problem, not the solution.</quote>

        Google itself is not the problem; after all, they only exist because people want to advertise their products.

        Rather than say Google is the problem, I would be more inclined to say that "commercialisation" as a whole is the problem. Example: if Google were to collapse tomorrow, would it be replaced by another advertising company? Of course it would.

        I for one would be quite happy to pay a small fee (5 Euros per month) for a non-commercial internet...

      2. Destroy All Monsters Silver badge

        Re: Why the Internet clusterfucks

        > There's absolutely no way for those input parameters to indicate which output parameter is the bottleneck.

        Which is why there are more protocols than TCP or TCP over UDP...

  2. A Non e-mouse Silver badge

    As has been mentioned on various other websites, they've re-implemented SCTP on top of UDP.

    The argument being that most (filtering) devices on the Internet haven't even been upgraded to support IPv6 (a completely new protocol), let alone SCTP (which just runs on top of traditional IPv4, like good ol' TCP & UDP).

    I can understand the argument, but I believe we're in a chicken and egg situation. No commercial company will support SCTP or IPv6 unless it has to. And no one will use these protocols until they can cross the Internet.

    1. Suricou Raven

      The core of the internet generally lets any protocol through - that's how it was designed. It's all those NAT/PAT boxes at the edges that are the problem - the ones used on almost every single company network, and every home with more than one connected device. Yet another problem that IPv6 would solve.

      1. Gerhard Mack

        I doubt it; people have gotten used to those little boxes providing a firewall, so IPv6-enabled boxes will likely still just drop anything they don't recognize.

        Even Cisco barely supports SCTP.

      2. Eddy Ito


        That's my main IPv6 hurdle. Every piece of kit I have, even the printers, claim to be IPv6 ready except one and that is the one that my ISP, which is spelled with a v, makes me use to connect to their network. Sure, they will let me buy my own, no discount, but it has to be one of three approved models more or less identical to the unit I have. As you say, the box on the edge. I'd wager the stumbling block is that ISPs don't want to change their kit and have to retrain the service techs and deal with the support calls from customers who want to blame the 'new' protocol for their misconfigured [game, email, armadillo].

  3. Anonymous Coward


    "This is not HTTP 1.1 compatible, but since it's designed to be handled in browser and server, it should be invisible to users."

    Really?? So it is not backwards compatible, yet users will not notice any difference? Well, as with other incompatible web features that are bundles of joy for web developers and users alike, users are probably now going to see web applications display error messages such as "This web site works only with HTTP 2.0. Please upgrade your browser."

    1. Return To Sender


      "This web site works only with HTTP 2.0. Please upgrade your browser."

      Whoa there, calm down. You do realise that HTTP is only the delivery mechanism? Most web site developers never go anywhere near HTTP, in fact I'd be surprised if many of them even realise it exists. So, how will this play out?

      First up, the likes of Apache et al. will start including HTTP 2 support in their web servers. Mozilla etc. will start building support for it into their browsers. And guess what: there will be a negotiation as part of the initial handshake between client and server as to who supports what, just like there is now for things like HTTP 1.0 / HTTP 1.1 support, compressed data and so on. So, faced with a client only supporting HTTP 1, the server will simply deliver over that, but would take advantage of HTTP 2 if it were present. This approach is far from unique to HTTP, of course.
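      The fallback principle described above can be sketched in a few lines. This is a hypothetical simplification (real HTTP/2 negotiation uses TLS ALPN or an Upgrade header); the point is just that the server picks the best version both sides speak, so old clients keep working:

```python
# Hypothetical sketch of protocol-version negotiation: the server serves the
# highest version both ends support, falling back gracefully. Real HTTP/2
# negotiation uses TLS ALPN or an Upgrade header; this shows the principle.
SERVER_SUPPORTS = ["HTTP/2", "HTTP/1.1", "HTTP/1.0"]  # preference order

def negotiate(client_supports, server_prefs=SERVER_SUPPORTS):
    for version in server_prefs:
        if version in client_supports:
            return version
    raise ValueError("no common protocol version")

print(negotiate(["HTTP/1.1", "HTTP/1.0"]))  # HTTP/1.1 -- old client, no error
print(negotiate(["HTTP/2", "HTTP/1.1"]))    # HTTP/2 -- both ends upgraded
```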

      By the time it gets to the point where a connection fails due to lack of support, it'll likely be for one of two basic reasons: 1) you're using a truly ancient browser / library, or 2) some fuckwit admin has managed to configure his server to only use HTTP 2 before the majority of the world can support it. Either case is easily remedied with a length of 4 x 2 with six-inch nails knocked through it.

      1. itzman

        Re: Hmm.


        Another comment from someone who didn't bother to read the RFC before putting fat finger to keyboard.

        Do you know whether your data connection on any given session is compressed or not? No. Neither do you care. If both ends support it, you get it, transparently.
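        The transparency point can be sketched with stdlib gzip: if both ends support it (negotiated via Accept-Encoding / Content-Encoding), the bytes on the wire shrink, but the application sees exactly the same payload:

```python
import gzip

# Sketch of transparent HTTP compression: the payload is compressed on the
# wire, but the bytes the application sees are identical either way.
body = b"<html>" + b"the same boilerplate repeated " * 50 + b"</html>"

on_the_wire = gzip.compress(body)        # what the server actually sends
received = gzip.decompress(on_the_wire)  # what the client's library yields

assert received == body                  # the page is byte-identical
print(len(body), len(on_the_wire))       # repetitive content shrinks a lot
```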

        HTML 2.0 is just a small refinement that will in some cases reduce server and client and network loading. In many cases it won't make a blind bit of difference.

        Most people will not know if it's in operation or not.

        The QUIC thing looks more amusing.

        1. gidi

          Re: Hmm.

          HTML 2.0? Speaking of fat fingers...

      2. Irongut Silver badge

        Re: Hmm.

        If your web developer does not know what HTTP is, fire them! That's like an application developer not knowing what a floating point number is.

        1. Destroy All Monsters Silver badge

          Re: Hmm.

          How many application developers really know what a floating point number is, and, more to the point, how it behaves and has to be used??

    2. dogged

      Thanks for posting the 1/16th clued General Public response, AC.

      It's always good for us to be aware of what those with just enough knowledge to be massive fucktards will say.

  4. Pete Spicer

    I have the feeling that some of the people commenting here do not remember HTTP 1.1 first emerging and some of the interesting consequences that came with it for both site operators and browser users.

    Sure, it was a very different time, when most people who had sites were generally more tech-savvy than most of today's site owners (speaking as someone who's done support for off-the-shelf site running software)... but an awful lot of sites had a slightly ugly transition.

    If I remember rightly, a lot of it came down to the simple Host header, which wasn't required in 1.0 but was made mandatory in 1.1 to allow name-based virtual hosting to work (1.0 assumed one site per IP address, something that clearly doesn't work in a shared hosting environment). That broke a lot of things in the middle, especially proxy servers. Sure, it's not a problem now because most things are using 1.1, but it's interesting to note that there's still a fairly large undercurrent of things not implementing 1.1 for various reasons.
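    Why the Host header mattered can be shown with a short sketch: with name-based virtual hosting, one IP serves many sites and the server routes on Host, while an HTTP/1.0 request gives it nothing to route on. The hostnames and routing logic here are made up for illustration:

```python
# Illustrative sketch of name-based virtual hosting: one IP, many sites,
# routed by the Host header. Hostnames and contents are made up.
SITES = {
    "example.org": "site A content",
    "example.net": "site B content",
}

def route(raw_request):
    """Pick a virtual host from a raw HTTP request; HTTP/1.0 has no Host."""
    lines = raw_request.split("\r\n")
    headers = dict(
        line.split(": ", 1) for line in lines[1:] if ": " in line
    )
    host = headers.get("Host")
    if host is None:
        # HTTP/1.0 client, no Host header: only one site per IP is possible
        return "default site (can't tell which one you wanted)"
    return SITES.get(host, "404: unknown virtual host")

req_11 = "GET / HTTP/1.1\r\nHost: example.net\r\n\r\n"
req_10 = "GET / HTTP/1.0\r\n\r\n"
print(route(req_11))   # site B content
print(route(req_10))   # default site (can't tell which one you wanted)
```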

    I haven't read the 2.0 spec, mostly because these things seem to change almost like the wind (like some of the stuff in HTML5 at times), so once it's settled down a bit I'll check it out and see if the firewalling systems I work with will need any changes (given that they do a little behavioural profiling based on what a given set of HTTP headers contains, there are going to be changes needed), but it's probably going to be 2016 before I really have to worry about any of that.

  5. Anonymous Coward

    I've got an easy answer

    The obvious answer to resource contention is to make the resource more expensive. Charge users by the byte. Then, watch as the market demands an end to the endless stream of crap that is pushed their way in every HTTP connection that purports to be a web page. Don't believe me? Look how lean the mobile version of your favorite web site is.

  6. Anonymous Coward

    I don't freaking care.

    HTTP 1.0 , 1.1 or 2.0. or whatever.

    I am an end user btw.

    I want my Youtube streaming NOW, I don't care if 2 hamsters at freetube headquarters have to be woken up to get my streaming done. If it speeds the stuff up, great!

    That's how compression got in the game, and most sensible sites use it. That's how JPEG and MP3 came to be before that, fer crying out loud. Up to this day I get pissed off whenever I see a Flash-only site that I can't see, or that is utterly broken, on my tablet / phone.

    Up to this day, Google's home page is the lightest thing you can get. Even better, it comes to ZERO bytes if you don't have to load it at all to use the search engine, and that's their greatest trick.

    If people focus on delivering the content that matters in a very efficient manner, the means used to achieve it won't make a difference. If HTTP 2.0 is a step in that direction, keep at it.

    By the way, updating the browser has been made a moot point by the Chocolate Factory. My Chrome is at version 27.0.1453.116 m and is updating itself right now.


Biting the hand that feeds IT © 1998–2022