TCP alternative QUIC reaches IETF's Standards Track after eight years of evolution

Quick UDP Internet Connections (QUIC) has graduated to the Internet Engineering Task Force’s standards track. The QUIC spec, aka RFC 9000, appeared on May 27th, marking the end of the beginning for a story that started in 2013 when Google revealed it was playing with QUIC, which it then described as "an early-stage network …

  1. Phil O'Sophical Silver badge

    QUIC’s best trick is to allow a client and server to send data, even if they have never connected.

    Just like X.25's "fast select" from the 1970s?

    1. b0llchit Silver badge
      Coat

      Everything old is new again. Each generation reinvents the same parts of the same wheel using new vocabulary and a modern color. Next up we'd have computers talking to each other as a new fashion thing.

      "Human: Computer, where is my computer? Computer: You have me in your pocket."

      1. Anonymous Coward
        Anonymous Coward

        12 year loop

        Several years ago I wrote a short piece on an observed feature in the industry I was working in (oil&gas exploration). It was prompted by a presentation at a big conference showing a new way of improving project management - a technique I’d been using some 15 years earlier.

        I noticed that, in many of the major project management organisations, few people stayed in a post for more than 5 years, many no more than 3. I had also been able to observe staff hangovers in a hospital ward for a couple of years (a family member with a long-term stay - now out and about). There were 3 shifts in the ward and I noticed a number of times when the handover meetings missed something (fortunately not life-threatening, but still a concern). I compared this to the offshore regime, where there are only 2 shifts. In the latter, you are usually handing over to the person who handed over to you; this didn’t happen in the former, and the missed items (fortunately very few) were in the second handover. I put this down to the fact that the second shift, whilst aware of something, didn’t own the issue and consequently wouldn’t put the same emphasis on it. The saving grace was that most points didn’t become significant until they got to the third shift (who were already aware of them).

        Now apply this to the corporate situation where a problem arises and is solved. The situation (problem and solution) is passed onto the next generation but, when it comes to the one after that, the reason for the solution is lost and the problem comes back. After all, it often came about because it was the easier way and the solution needed a bit extra.

        Taking an average of 4 years in post, we shouldn’t be surprised when wheels start getting reinvented after 12 years…. I called it by a made-up word: hystery.

        1. AlanSh

          Re: 12 year loop

          Staff hangovers? Love it.

        2. a_yank_lurker

          Re: 12 year loop

          The tendency of companies not to keep lifers around also compounds this. Often no one knows the full history of something, which causes needless 'reinvention' as the current staff try to figure out why something was done X years ago.

          1. Lorribot

            Re: 12 year loop

            Lifers not being kept is something I see more in metropolitan-based companies, or at least among staff based at metropolitan offices, as they often have a bigger choice of employers and better advancement options. Those based out in the sticks are less inclined to move on, as there are often only one or two companies that pay the rates required or have estates large enough to be interesting, and moving companies can often mean moving house or facing a long commute. In my own company, my own 20 years there is small fry compared to some; 30-35 is not unknown, none of whom are based in the major city offices.

            Cities are a great place to locate if you want a large pool of employees to choose from.

            Cities are a great place to locate if you want a large pool of employers to choose from.

            It works both ways.

        3. Anonymous Coward
          Anonymous Coward

          Re: 12 year loop

          " I called it by a made-up word: hystery."

          Someone once said that religious movements follow that same trajectory. The founder had insights - the next generation of acolytes had learned from the founder - from then on everything was done by blind faith and rote.

      2. spireite Silver badge
        Flame

        Like many oldies in the industry, now in my 4th decade, I sit at my desk and look at all the new buzzwords flying round, and 'new' concepts and think......

        WTF are they talking about? It's not new, it's a reinvention of the wheel from before that is different in one of the following ways.....

        0. It's round, has 5 spokes....

        1. It's round, it has ten spokes...

        2. It's square, has 8 spokes....

        No such thing as new in this game... it's a constant rehash, new shiny coat, s**t underneath, with the utopian promise that it's faster and cheaper, but after 6 months of real-world usage....... it isn't

    2. Anonymous Coward
      Anonymous Coward

      >Just like X.25's "fast select" from the 1970s?

      It's really a bit more like UDP, from the 1980s. Which is why it's in the name.

  2. bombastic bob Silver badge
    Meh

    networking boffins rated QUIC as more vulnerable to web fingerprinting than HTTPS

    I am also concerned about packet injection, impersonation, and other such security issues

    TCP at least TRIES to make packet injection and impersonation difficult.

    1. Richard 12 Silver badge

      Re: networking boffins rated QUIC as more vulnerable to web fingerprinting than HTTPS

      You should read the actual RFCs. It's clear you haven't understood any of it.

      TCP does exactly nothing to prevent packet injection. Anyone in the route can swap a few packets if they feel like it, and there's no way for the other end to detect it. An application can only defend against those attacks by layering security on top of the stream - usually TLS.

      QUIC requires TLS.

      The only way to spoof QUIC packets is to break the encryption or poison the certificate chain. Not impossible of course, but no less difficult than breaking HTTPS.

      1. Charles 9

        Re: networking boffins rated QUIC as more vulnerable to web fingerprinting than HTTPS

        I think the concern is that, by breaking up the one big request into multiple smaller ones, as QUIC does, the pieces can be fingerprinted by address and size, which encryption cannot conceal.

        1. Jellied Eel Silver badge

          Re: Ready salted packets

          ...the pieces can be fingerprinted by address and size, which encryption cannot conceal.

          There's a solution for that, ie salting traffic to make traffic analysis harder. But then the point is to minimise competitors' fingerprinting efforts. It also kinda misses the point of UDP vs TCP. One offers some form of reliable networking, the other doesn't. But if you're billing by network usage, converting TCP sessions into QUIC increases traffic, especially if there's congestion, packet loss, and data has to be retransmitted.

          Whether that makes a session 'faster' is debatable, especially if it depends on applications, rather than network devices, to notice packet loss. There are also a few other potential inverse-efficiency snags if sessions are broken into multiple streams and 'goodput' is reduced with more headers than payload. Plus extra fun for silicon and buffer tuning, if 1 session becomes 10 small-packet 'spray & pray' communications.

          1. Charles 9

            Re: Ready salted packets

            "It also kinda misses the point of UDP vs TCP. One offers some form of reliable networking, the other doesn't. "

            Counterpoint: TCP is layer 4 and can't ensure reliability if layer 1 (the physical layer) is unreliable, and wireless networks are usually less reliable. Plus a single connection usually means a single thread, reducing parallelism potential.

            "Whether that makes a session 'faster' is debateable, especially if it's depending on applications to notice packet loss than network devices."

            Consider that back when the protocols were first deployed, local parallelism wasn't exactly en vogue. Now, most applications are expected to be multithreaded and multitasking, able to prioritize and know what's important and what's not. Otherwise, you're going to get into a debate over which has a better idea of how to prioritize: the application layer or the protocol layer?

            1. Anonymous Coward
              Anonymous Coward

              Re: Ready salted packets

              Please go read up on TCP, especially 'retransmission' and "sequence checking", etc. You really are mistaken about reliability and the guards to assure that.

              1. Jellied Eel Silver badge

                Re: Ready salted packets

                Yup. I can kinda see the point to QUIC inside datacentres or in supercomputer environments where parallelism can be handy. On congested public networks, far less so. Or even in congested private/virtually public. But that's always been one of those challenges with the IP suite compared to alternative protocols.

            2. Anonymous Coward
              Anonymous Coward

              Re: Ready salted packets

              "TCP is layer 4 and can't ensure reliability [...]"

              The word "reliability" for TCP means that you know whether your transmission reached its destination or failed en route. It doesn't mean it is guaranteed to get through if there is no path.

              1. Yes Me Silver badge

                Re: Ready salted packets

                Right. As best it can, TCP creates reliability over an unreliable path, although there is always a residual probability of failure. But if you want transactional integrity, you have to add a 2-phase commit on top of TCP (or whatever other transport protocol you use).
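
                The two-phase commit mentioned above can be sketched roughly like this (a minimal in-process Python illustration; the `Participant` class and its voting behaviour are invented for the example, with no real network or durability):

```python
# Minimal two-phase commit sketch, layered conceptually on top of a
# reliable transport such as TCP. Everything here is simulated
# in-process; names and behaviour are illustrative only.

class Participant:
    def __init__(self, will_vote_yes=True):
        self.will_vote_yes = will_vote_yes
        self.state = "init"

    def prepare(self):
        # Phase 1: vote on whether this participant can commit.
        self.state = "prepared" if self.will_vote_yes else "aborted"
        return self.will_vote_yes

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.state = "aborted"


def two_phase_commit(participants):
    """Return True iff every participant votes yes and then commits."""
    # Phase 1: prepare - collect a vote from every participant.
    votes = [p.prepare() for p in participants]
    if all(votes):
        # Phase 2: commit - everyone voted yes, so tell all to commit.
        for p in participants:
            p.commit()
        return True
    # Any "no" vote aborts the whole transaction for everyone.
    for p in participants:
        p.abort()
    return False
```

                The point of the sketch is the one the comment makes: the atomicity comes from the protocol layered on top, not from the transport underneath.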

                1. Charles 9

                  Re: Ready salted packets

                  There's also the issue that not all parts of the Web need the same level of attention. The baseline HTML, sure, but background images? Many other images? A lot of it is nice but not essential, so UDP should be OK for them; if they get lost, oh well.

                  1. Roland6 Silver badge
                    Pint

                    Re: Ready salted packets

                    >but background images? Many other images? A lot of it is nice but not essential, so UDP should be OK for them; if they get lost, oh well.

                    Those are the most important parts that must not be lost - think of the lost ad revenue!

                    I hope with QUIC webpages will be able to load without having to wait for all the bloat that turns 3k of content into 70MB from 50+ sources.

              2. EnviableOne

                Re: Ready salted packets

                TCP is reliable: windowing, retries, acks and frags

                if your transmission fails en route it "tries again" and again

                gradually reducing the window until it reliably gets data from A-B

              3. Richard 12 Silver badge

                Re: Ready salted packets

                QUIC, like TCP and many other protocols, takes an unreliable way of sending packets (ethernet frames, UDP packets) and creates reliability over the top.

                There are a lot of different ways to do that.

                TCP has a lot of known problems when running over congested links: latency rises and throughput falls exponentially. That can easily end up with everything waiting for a favicon to be downloaded before any of the actual useful content, mainly due to the current insane way web pages are assembled...

                QUIC is supposed to be "better", though I'm not yet sure what its pitfalls are.

                I've used several other reliable-over-UDP protocols that fix TCP's major problems and replace them with their own new and exciting major problems.

            3. naive

              Re: Ready salted packets

              Being a sliding window protocol, TCP is extremely reliable, contrary to UDP which is sort of "fire and forget", leaving it to the application to implement reliability if it needs it.
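
              The sliding-window retransmission idea behind that reliability can be sketched as a toy Python model (the lossy "link", Go-Back-N-style receiver, window size and loss rate are all invented for the illustration):

```python
import random

# Toy sliding-window sender over a lossy simulated link, showing the
# retransmit-until-acknowledged idea behind TCP-style reliability.

def send_reliably(data, window=4, loss_rate=0.3, rng=None):
    """Deliver every item of `data` in order despite random packet loss."""
    rng = rng or random.Random(0)   # seeded so the sketch is repeatable
    delivered = []
    base = 0                        # lowest unacknowledged sequence number
    while base < len(data):
        # Send everything currently inside the window.
        for seq in range(base, min(base + window, len(data))):
            if rng.random() >= loss_rate:   # packet survives the link
                if seq == len(delivered):   # receiver accepts only in order
                    delivered.append(data[seq])
        # Cumulative ack: the receiver holds everything below
        # len(delivered), so slide the window up; anything still
        # unacknowledged is retransmitted on the next pass.
        base = len(delivered)
    return delivered
```

              UDP, by contrast, is just the single `rng.random() >= loss_rate` line: the packet either arrives or it doesn't, and anything more is the application's problem.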

  3. Anonymous Coward
    Anonymous Coward

    “more vulnerable to web fingerprinting”

    Sounds like an intended feature given Google are involved. Cloudflare are also perfectly placed for hoovering up this info.

    1. Cem Ayin

      For details see here:

      https://svs.informatik.uni-hamburg.de/publications/2018/2018-10-11-Sy-PETCON-Tracking_via_the_QUIC_Protocol.pdf

  4. Dan 55 Silver badge
    Devil

    Google pushing the web in the direction they want and making it a de facto standard in Chrome?

    This is shocking news. Such a thing has never been heard of before. Who do I complain to? Oh, I can't because Google's support is non-existent and the W3C are powerless against Google.

    Still, the good news is they'll get bored after a year and drop it after messing everyone else around.

    1. Steve Davies 3 Silver badge
      Mushroom

      Re: Google pushing the web in the direction they want and making it a de facto standard in Chrome?

      How long before we are all required to stop calling it 'The Internet' and have to call it

      "GoogleNet"

      All your messages are owned by the Borg aka Google.

      {eat this Google} See Icon

  5. Anonymous Coward
    Boffin

    If it gets rid of . . .

    If it gets rid of TCP, I will be pleased.

    And if Microsoft's open source version, MsQuic, can supplant Google's version, I will applaud.

    Giving kudos to Microsoft hurts my brain but lately they seem to be earning them. At least their naming committee is still stupid. I suppose we should be glad that they didn't name it Quicky.

  6. Totally not a Cylon
    Headmaster

    Another way of speeding up the internet

    Get rid of javascript and all 'load resource from a totally different domain' rubbish.

    I remember when we used to fine tune the raw html to get page load times down. Now it seems nobody cares and they just load routines at random without caring what performance/security holes they create, just to make it 'cool-looking'.

    1. Charles 9

      Re: Another way of speeding up the internet

      Sometimes, you just can't have nice things. If 'cool-looking' gets all the clicks, what else can you do?

      1. stiine Silver badge
        Mushroom

        Re: Another way of speeding up the internet

        Send them one of these -->

  7. Anonymous Coward
    Anonymous Coward

    Quic

    is a protocol highly optimised for web use, and that's fine.

    TCP is a general purpose 'reliable' protocol that has been around for about 45 years and still works pretty well.

    Can you make a new protocol better in some circumstances? Absolutely. Will Quic become the backbone of the internet for the next 45 years? No. And personally I'm not a big fan of designing a data exchange protocol around a set of user-space application requirements, because we'll all have shortly moved on.

    1. matjaggard

      Re: Quic

      Moved on to what? I think it's fair to say that delivering content like webpages to browsers will be around for quite some time yet. Yes the internet is changing at a crazy pace but I think this is dealing with a problem that will be relevant for ages.

      Other options: TOR? BitTorrent?

      I don't think so

    2. Graham Cobb Silver badge

      Re: Quic

      I agree with your point about TCP being general purpose - although it has been heavily optimised for two (conflicting) use cases (one way transfer of bulk data as quickly and efficiently as possible, and interactive "telnet/ssh" use - at least in the days when I was involved).

      But given the importance of web traffic, it is reasonable to think that TCP cannot really be optimised for it and a new transport protocol could be more efficient.

      There are other transport layer protocols designed to optimise niche applications (such as SCTP for SS7). One for the web seems likely to have quite a long lifetime.

  8. Kevin McMurtrie Silver badge

    Standard packets

    One way to break fingerprinting would be using standardised libraries that obscure packet rhythms. Each end-to-end route could be assigned some small random amount of buffering latency, used to maximise packet size. This buffering would make the traffic between different sites more similar, hopefully with a smaller variance than the random buffering latency assigned to the route.

    No doubt that Google's solution will be to flood apps and servers with their own free QUIC implementation that maximizes fingerprinting. If it really is a layer on UDP, apps could implement QUIC themselves in a snooping-friendly manner rather than rely on a more secure implementation in the OS.
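
    The padding-plus-jitter idea in the first paragraph could look roughly like this (a Python sketch; the bucket size, jitter budget and `shape` function are invented for illustration, and real padding would need explicit framing so the receiver can strip it):

```python
import math
import random

# Sketch of traffic shaping against size/timing fingerprinting:
# pad each payload up to the next fixed-size bucket and attach a
# small random send delay, so packet sizes and rhythm leak less.
# All constants here are made up for the example.

BUCKET = 256          # pad payloads to a multiple of this many bytes
MAX_JITTER_MS = 20.0  # per-route random delay budget

def shape(payload: bytes, rng=None):
    """Return (padded_payload, delay_ms) with the true length concealed."""
    rng = rng or random.Random()
    # Round the length up to the next bucket (always at least one bucket,
    # so even an empty payload looks like every other small packet).
    target = max(BUCKET, math.ceil(len(payload) / BUCKET) * BUCKET)
    padded = payload + b"\x00" * (target - len(payload))
    delay_ms = rng.uniform(0.0, MAX_JITTER_MS)
    return padded, delay_ms
```

    An observer then sees only bucket-sized packets on a jittered clock, which is the "smaller variance" property the comment is after.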

  9. Anonymous Coward
    Paris Hilton

    “Internet-grooming company Cloudflare”

    I’m guessing that Savile-esque turn of phrase didn’t come from their marketing division.

    “Ows about that! Can you see where my packet is…? Uhh huh uhh huh!”

  10. iced.lemonade

    IPv6

    personally i feel migration to IPv6, or anything that solves the shortage of IPv4 addresses, deserves more attention than QUIC, which seems to be a nice-to-have rather than crucial.

    1. Roland6 Silver badge

      Re: IPv6

      QUIC isn't dependent upon IPv6, so given the momentum behind it, it is likely to become a very real part of your everyday Internet experience (if it isn't already) much sooner than IPv6...

  11. sreynolds

    Shame that they used TLS...

    TLS has a lot of baggage. TCP was designed in the days when buffers were small, serial links were slow and memory was limited. Furthermore, congestion management didn't take time into account, as CoDel does. The real shame is that there is no version based on the Noise protocol.

    1. EnviableOne

      Re: Shame that they used TLS...

      With TLS 1.3 ratified and its optimisations, the baggage is a lot less...

  12. Usage Lapsed

    Really? It will do all that?

    I’m sorry, I just don’t get this.

    For years all I’ve seen is how a tweak to the connection part of the data is the holy grail for speed and amount of data used; it will solve all issues… this is just smoke and mirrors.

    In reality, we have to wait for multiple dns lookups for ad networks, then all the auctions to complete before advertiser then decides which to serve, then of course the data transfer for the ad, often before the page will render fully.

    When you only have 1 bar of signal and trying to get an address or so because you are lost, then what would really speed it up is dropping all the crap.

    1. Charles 9

      Re: Really? It will do all that?

      To which they respond, "Price of admission. Or would you rather PAY for the privilege?"

  13. foxyshadis

    Firefox's implementation of HTTP/3 with QUIC is going live this week too, so that's another point that'll drive adoption. I've been using it for a year solid, and sporadically before that, and when it works, it works great. (When it doesn't, it takes extra refreshing and it's really annoying. Twitter, for instance, has a terrible HTTP/3 server.)
