Don't rush to adopt QUIC – it's a slog to make it faster than TCP

Quick UDP Internet Connections (QUIC), the alternative to Transmission Control Protocol advanced as a fine way to speed up web traffic, struggles to deliver that outcome without considerable customisation. So write Alexander Yu and Theophilus A. Benson of Brown University in a paper [PDF] titled "Dissecting Performance of …

  1. storner
    Boffin

    Patience, my dear

    TCP has evolved over some 40-50 years. I suppose QUIC will eventually deliver on its performance promises, but it sure isn't going to be a simple quic-fix ;-)

    1. Wellyboot Silver badge

      Re: Patience, my dear

      UDP has been around just as long.

      Isn't QUIC just UDP with knobs on?

      1. Pascal Monett Silver badge

        It's been around just as long, but it has only just started to be tweaked like that.

        At least, I think so.

      2. Peter Gathercole Silver badge

        Re: Patience, my dear

        Well. Just reading up on this, the first thing it does is move flow control and error correction into userland.

        This means to use it, the application will have to understand QUIC rather than relying on the OS to do the flow control, packet loss processing etc. So this could mean that some applications you use work f**king fantastically, whilst others are terrible.

        Secondly, it collapses much of the security setup such as TLS into the initial QUIC handshake, making it a much less layered system than the same thing broken into phases on TCP. The same could be done with TCP, but of course UDP has much lower connection-setup latency than TCP, because the separate session negotiation is eliminated.

        Thirdly, it includes multiplexed sub-channels, each of which is corrected by the application. This prevents what is called head-of-line blocking, where one transmission error can cause other multiplexed channels within the same TCP session to be blocked until the re-transmission is successful. Sliding windows on TCP can reduce the effect of this, but I've seen this blocking happen on unreliable connections with TCP, especially if you get double or triple retransmission attempts.

        Of course, with TCP you could open multiple independent TCP sessions to do the same sort of thing, but that means that you have the multiple setup and breakdown overhead of each session to consider.
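
        A rough sketch of that workaround, for what it's worth: each parallel connection below pays its own TCP and TLS handshake (and its own teardown), which is exactly the setup cost QUIC's multiplexed streams are meant to pay once. The host and paths are made-up placeholders.

        # Sketch: the multiple-TCP-sessions workaround. Each HTTPSConnection is
        # its own TCP + TLS setup, so you dodge head-of-line blocking at the
        # price of repeating the handshakes. Host and paths are hypothetical.
        from concurrent.futures import ThreadPoolExecutor
        from http.client import HTTPSConnection

        HOST = "example.com"
        PATHS = ["/style.css", "/app.js", "/logo.png"]

        def fetch(path):
            conn = HTTPSConnection(HOST, timeout=10)   # separate TCP + TLS handshake
            try:
                conn.request("GET", path)
                return conn.getresponse().read()
            finally:
                conn.close()                           # separate teardown, too

        with ThreadPoolExecutor(max_workers=len(PATHS)) as pool:
            bodies = list(pool.map(fetch, PATHS))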

        1. Arthur the cat Silver badge

          Re: Patience, my dear

          SCTP was supposed to address the third point, but AFAICS it never gained general acceptance.

        2. teknopaul

          Re: Patience, my dear

          I view quic like http2. It makes a small difference to Google and Facebook, who have huge amounts of traffic, so for them the engineering effort is comparatively insignificant.

          They can convert that to a competitive advantage. No one else can.

          Especially Google, which has a vertical with its browser that should never have been allowed.

          Simplicity is a good thing.

          It is only in these huge tech giants' interest that complicated tech is 20% faster.

          Simplicity is in everyone else's interest.

          Where I work we have 4 of everything. Two for failover and two more in the backup datacenter. A 20% benefit will never apply to anyone with fewer than five of everything in the PSC.

          I can get 250,000 cc off a single Linux instance using TCP and a standard stack. I doubt I will ever work on a project that needs six of those.

          The notion that we all benefit from these advancements is provably incorrect.

      3. David Taylor 1

        Re: Patience, my dear

        The way I view QUIC is that it's just a custom TCP stack built on UDP.

        With control over both endpoints (e.g. Google, Chrome and YouTube.com or whatever) you can make bolder changes to congestion control etc. without worrying about compatibility.

        Without such control, it's less appealing (although if there's some open source version with useful tuning for your circumstances it could still be useful).

        The "massive implementation overhead" is basically *why* QUIC exists -- to let Google play around with reimplementing TCP to their own tastes.

        1. Will Godfrey Silver badge
          Meh

          Re: Patience, my dear

          This is quite new to me, but I'm not entirely surprised Google is involved. Just looks like a work and blame shifting exercise to me. Maybe we should all just use UDP and write our own protocols...

          Just kidding - honest.

          1. teknopaul

            Re: Patience, my dear

            Seriously Google could do that.

            They own the server side, the browser and the network in most cases.

            But they don't want to do that. They want to make everyone else have a _disadvantage_.

            Hence quic.

            A balkanised Internet is not far away.

        2. BOFH in Training

          Re: Patience, my dear

          If you need control over both end points to make sure it works best, maybe QUIC is better suited to links within the DC or between clusters, where you have total control over everything, including the end points. You can still use TCP for public connections coming in from outside to your front end systems, with QUIC deployed for the backend systems feeding the front end.

          1. David Taylor 1

            Re: Patience, my dear

            If you have total control over *everything*, just size the links appropriately so there is no significant congestion.

            It works for Google because they control the browser and (their) servers, but not the network in between.

            For everyone else? Well, they don't get any extra control by using Google's protocol, but if it happens to work better than TCP for some use-case, why not?

            1. JetSetJim

              Re: Patience, my dear

              > just size the links appropriately so there is no significant congestion.

              If the links are dynamically varying their properties, that gets quite hard, I imagine. QUIC is also being looked at in the mobile space, where RF conditions change rapidly and load also influences the bandwidth available to the user.

      4. bombastic bob Silver badge
        Devil

        Re: Patience, my dear

        Back in the late 2000s I was involved in coming up with a UDP method of getting perfect streaming video from existing RTP server setups. There were a few hurdles, but in general we had it working at least well enough to impress TV network providers. Ultimately they picked a TCP based solution that had hardware extensions to increase reliability through buffering, etc. (the opposite approach).

        My solution was better: like a wifi connection, or a zmodem transfer, it simply detected the missing packets and asked for them to be re-sent, re-assembling things that were out of order back into the correct order up to a maximum time window (beyond which you would get video skippage, not just noise).

        Seriously I have to wonder if QUIC manages a constant uninterrupted stream with occasional "send it again" requests (in which you re-assemble things into the correct order without holding up the stream).

        Yeah, maybe they ARE doing things in that semi-obvious fashion; as I'd only just heard of this QUIC protocol, I haven't had time to look at it.
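
        For what that idea looks like on the wire, here is a minimal sketch of that kind of NACK-and-reorder receiver, assuming a made-up sender address and a one-byte "N" resend request. It's an illustration of the general technique only, not bob's actual system and not QUIC's loss-recovery format.

        # Sketch of a NACK-based UDP receiver: detect gaps in sequence numbers,
        # ask for resends, reorder within a playout window, and skip anything
        # that arrives too late. Addresses and packet layout are hypothetical.
        import socket, struct, time

        SENDER = ("198.51.100.7", 5004)   # hypothetical video source
        PLAYOUT_WINDOW = 0.5              # seconds we will wait for a resend

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", 5004))

        expected, pending = 0, {}         # pending: seq -> (arrival_time, payload)

        def deliver(payload):
            pass                          # hand the packet to the decoder here

        while True:
            packet, _ = sock.recvfrom(2048)
            seq, payload = struct.unpack("!I", packet[:4])[0], packet[4:]
            if seq < expected:
                continue                  # too late, that slot was already skipped
            pending[seq] = (time.monotonic(), payload)

            # Ask the sender to repeat anything missing below the newest packet.
            for missing in range(expected, seq):
                if missing not in pending:
                    sock.sendto(b"N" + struct.pack("!I", missing), SENDER)

            # Deliver in order; give up on a gap once the window has expired.
            while expected in pending or (
                pending and time.monotonic() - min(t for t, _ in pending.values()) > PLAYOUT_WINDOW
            ):
                if expected in pending:
                    deliver(pending.pop(expected)[1])
                expected += 1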

        1. Richard 12 Silver badge

          Re: Patience, my dear

          Basically, yes.

          As does TCP.

          The key features of QUIC are:

          1) Security negotiation is part of the standard, not bolted-on afterwards. So setting up a secured link is much faster.

          2) Multiple streams by default, so you don't end up waiting for now-irrelevant data (head-of-line blocking).

          The latter is of course easily done by opening multiple TCP streams, however each TCP stream requires its own security negotiation and consumes a port.

          Modern web pages might refer to hundreds of different resources, so being able to request multiple of them all at once over a single QUIC link and let the server deliver them in any order is valuable.

          However, if the page resources come from many different servers, or there are only a small number of them, then it makes no real difference.
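
          On the first point, the bolted-on sequence being replaced looks like this in a typical client: one handshake for TCP, then a second negotiation for TLS on top, before any request goes out (a minimal Python sketch, with example.com standing in for any server):

          # The layered setup QUIC collapses into one exchange: a TCP handshake,
          # then a TLS handshake on top, before a single application byte moves.
          import socket, ssl

          ctx = ssl.create_default_context()
          raw = socket.create_connection(("example.com", 443))        # TCP 3-way handshake
          tls = ctx.wrap_socket(raw, server_hostname="example.com")   # TLS negotiation on top
          tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
          print(tls.recv(200))
          tls.close()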

          1. Roland6 Silver badge

            Re: Patience, my dear

            >Modern web pages might refer to hundreds of different resources

            Most of which add zero value to the user...

        2. sabroni Silver badge
          Coat

          Re: My solution was better

          Of course it was, yours RANDOMLY CAPITALIZES text!!

      5. DufDaf

        Re: Patience, my dear

        The knobs are important.

  2. sitta_europea Silver badge

    It's taking us long enough to get a handle on where TCP breaks - or can be broken - and we're still working on it.

    I don't believe that throwing another lump of scented soap into the bathwater will help this baby at all.

  3. Anonymous Coward
    Gimp

    QUIC - meh

    No matter how fast the protocol, it still has to shift the gigantic wankery that is a modern website.

    Have a look at your dev tools and see if the protocol makes much real world difference at all compared to the overhead of the vast wodges of JS lumbering along the tracks.

    1. teknopaul

      Re: QUIC - meh

      This is the point. Given 100% more engineering resource, 99% of companies would be better off tuning web pages.

      Quic with a 20% gain (say) is only useful if you _don't_ already have a 21% gain to be made by optimising your web pages.

      Google should go back to squeezing another 0.5% of their own web pages and stop breaking the Internet.

      They are not making the Internet faster. They are finding interesting ways to abuse a vertical. And being allowed to by a weak US government.

      Did you/your company benefit from http2 bearing in mind that your competitors got it at the same time and Google/Facebook got it before you?

      Seriously interested to hear if you did.

  4. Brewster's Angle Grinder Silver badge
    Coat

    "...the paper ends."

    Handy to know. If it didn't end, I wouldn't have enough paper to print it!

    1. Brewster's Angle Grinder Silver badge

      Or enough time to read it...

  5. This post has been deleted by its author

    1. teknopaul

      talk

      The alternative approach to ossification is communicating with existing users.

      Google's handling of http proxies for Web sockets is a classic example.

      http was a text based protocol with obvious simplicity advantages.

      Google added completely mad binary complexity to the protocol instead of working with the community, just so they could hack their way through proxies.

      Resulting in a mess.

      Upgrade: socket

      would suffice, provided that you are willing to communicate with proxy developers.

      It's trivial to develop for any proxy that supports https; the code exists.

      Google have shown little tolerance for the existence of proxies because their own internal network did not use them.

      The excuses they use for the complexity added are nonsense. They literally make no sense.

      1. This post has been deleted by its author

        1. Roland6 Silver badge

          Re: talk

          That article was written in Dec 2017 and the checker tool seems to no longer be available, however a quick Google did give me: https://www.cdn77.com/tls-test

          Given the problem being reported was with some TLS 1.0 servers, it would be interesting to see the results from running the tests today.

  6. Kevin McMurtrie Silver badge

    The feedback is the same

    Packet loss and latency. Those are the two feedback values you have for tuning. Packet loss is a costly value to probe. Both values have highly dynamic optimal values.

    TCP, QUIC, and home-brew UDP layers can't improve what little data there is to work with.

    Fixing bloated JS and giving your marketing department rabies shots could improve HTTP performance by 90x.

    1. Pirate Dave Silver badge
      Pint

      Re: The feedback the same

      "giving your marketing department rabies shots"

      That's got to be Quote Of The Week. Thank you, sir.

  7. Fazal Majid

    Tragedy of the Commons

    QUIC is designed so the app decides what congestion control algorithm to use, not the OS networking stack. Most use the BBR algorithm, which is very aggressive and will steal more than its fair share of bandwidth in the event of congestion compared to TCP with CUBIC or older implementations. This creates a situation where greedy browsers using BBR have apparent better performance, and everyone starts switching over until the well-behaved clients are so penalized they become unusable when there is congestion. Of course, TCP can also run BBR, and outperforms QUIC, but that requires OS upgrades, which don't happen as frequently as browser upgrades.
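
    For what it's worth, on Linux the congestion control algorithm can also be picked per TCP socket from userland, provided the kernel already ships the algorithm; this sketch assumes a kernel with the tcp_bbr module available and falls back to the system default otherwise.

    # Choosing the congestion control algorithm for one TCP socket on Linux.
    # Needs a kernel with tcp_bbr loaded; otherwise the system default (often
    # cubic) stays in effect. TCP_CONGESTION is a Linux-only socket option.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
    except OSError:
        pass                             # bbr not available on this kernel
    print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
    s.connect(("example.com", 80))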

  8. martinusher Silver badge

    TCP is wrong for most network transactions

    TCP emulates a full duplex serial connection, which is why it's proved so popular and enduring. It's really easy for application programmers to use. It's also grossly inefficient with both system and network resources++, it's not very reliable**, and it invariably requires programmers to implement ad-hoc framing and messaging protocols on top of it. Web programming, for example, uses the same messaging codes that FTP does (those three digit codes you see at the beginning of the frames). It's a mess and is long past the time when it should have been rationalized.

    ++A TCP connection requires at a minimum a couple of timers and an extra thread. If the socket is likely to drop then it needs a secondary process to monitor the connection, detect the dropped socket and silently reconnect. The protocol itself, being truly full duplex, needs coordination between the send and receive sides to minimize the sending of acknowledgements (timers, i.e. delay). ACKs on protocols that already have ACKs (WiFi....) just lead to congestion par excellence (but I suppose people will just tweak the protocols to improve this, usually by adding yet more timers to delay things).

    **TCP is not very reliable for long duration connections because of the problem of detecting a dropped socket and silently reconnecting. It's also byte (octet) oriented, but many programmers don't realize this: they assume that sending a block of data into a socket will result in the same block of data turning up at the listening socket. It does a lot of the time, but that's a side effect, not a property of the protocol, so when a block gets -- legally -- fragmented their home-made protocols fall apart due to a lack of framing.
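
    The framing being complained about typically amounts to something like the length-prefix scheme below (an illustrative sketch only, not anyone's production protocol):

    # Minimal framing over TCP's byte stream: a 4-byte length prefix per message.
    # Without something like this, recv() may return half a "block" or two blocks
    # glued together, which is the failure mode described above.
    import socket, struct

    def send_msg(sock: socket.socket, payload: bytes) -> None:
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf += chunk
        return buf

    def recv_msg(sock: socket.socket) -> bytes:
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)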

    1. jezza99

      Re: TCP is wrong for most network transactions

      The fact is though that both enterprise network equipment and modern kernels are massively optimised for TCP.

      In a LAN environment, NFS was originally written on UDP. Some time later, NFS over TCP was defined, but the TCP overhead made it slower. However, for at least the last 10 years storage vendors have strongly recommended NFS over TCP for performance. The difference is kernel support on both server and client.

      In the WAN, if you control both end points there are devices which will optimise TCP to radically increase performance, even with high latency. This means that you can use standard applications such as SFTP to transfer data efficiently between continents. As these devices work by managing error correction it is hard to see how they would work if that were done at the application layer.

      I can see the advantage of including encryption as a tier 1 protocol feature though. If TCP were designed today it would surely have that.

    2. doublelayer Silver badge

      Re: TCP is wrong for most network transactions

      "it invariably requires programmers to implement ad-hoc framing and messaging protocols on top of it. Web programming, for example, uses the same messaging codes that FTP does (those three digit codes you see at the beginning of the frames."

      So does everything. UDP requires them to packetize everything, while TCP requires them to serialize everything. In each case, it's one or more strings of bytes. The only way for an application not to have to implement their own communication system above that is if the network layer implements lots of subtypes which it can transfer on its own. That's not very efficient--most programs' internal data will take the form of structures or classes which the transport layer certainly won't already know about.

      "++A TCP connection requires at a minimum a couple of timers and an extra thread. If the socket is likely to drop then it needs a secondary process to monitor the connection to detect the dropped socket and silently reconnect."

      It doesn't require those things. A single timer and no thread can work too, because the process reading from the socket can do that checking. Extra threads are not required for recovering from a dropped socket. More importantly, most programs don't have threads in place to silently recover from dropped sockets because that may involve recovery on the process side as well. It is not automatically the case that if your socket is not working you should open a new one and slot it in. Many protocols over TCP will want the side which reconnects to identify itself again, provide information on the last functioning state, etc.

      "ACKs on protocols that already have ACKs (WiFi....)": No, that's two different systems with ACKs. Each serves a different purpose. The WiFi AP could I suppose do the TCP acknowledging for the user, but that breaks compatibility with wired networks which wouldn't bother with that. Implementing it on the wired networks, on the other hand, would require more processing in switches or modems to figure out which of them is supposed to be intercepting the user's stream in order to acknowledge it.

      "TCP is not very reliable for long duration connections because of the problem of detecting a dropped socket and silently reconnecting.": Compared to alternatives, it's not that bad. If you want something that will stay connected for a very long time and you don't want it to drop, just arrange with the other side to send polls to one another from time to time. A single poll fail indicates that you need to reconnect. That works as well with TCP as it does with UDP.

      1. Brewster's Angle Grinder Silver badge

        Re: TCP is wrong for most network transactions

        "If you want something that will stay connected for a very long time and you don't want it to drop, just arrange with the other side to send polls to one another from time to time. "

        Or set SO_KEEPALIVE.

        Aside: and if you think things are bad now, go back 25 years and discover why "pipelining" is an optional extension in rather too many protocols.
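
        For completeness, turning that on from userland looks like this on Linux (option names differ on other platforms); the kernel then does the periodic probing instead of the application:

        # Kernel TCP keepalive instead of an application-level poll.
        # TCP_KEEPIDLE/KEEPINTVL/KEEPCNT are the Linux names for these options.
        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)    # idle seconds before first probe
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)   # seconds between probes
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)      # failed probes before the drop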

    3. Anonymous Coward
      Anonymous Coward

      Re: TCP is wrong for most network transactions

      " it invariably requires programmers to implement ad-hoc framing and messaging protocols on top of it."

      Wasn't that the entire point of the OSI stack?

      1. Michael Wojcik Silver badge

        Re: TCP is wrong for most network transactions

        Not really. The original point of the OSI stack was to be descriptive. Actual implementations were a secondary concern.

        But in any case the claim you quoted is wrong, for the obvious reason that a great many programmers will use existing implementations of higher-level protocols which provide framing and messaging. Most programmers working on distributed systems are writing web applications – most commonly in Javascript – which use existing HTTP implementations. Someone writing XHR requests for an RIA / SPA (or, more likely, writing to a Javascript framework which hides XHR under more layers of abstraction) is most certainly not worrying about framing and messaging of those requests and their responses.

        So "invariably" is a load of rubbish.

        There are those of us who do work with protocols at a lower level and have to worry about implementing things like message reassembly, but that's relatively rare, and should be rarer. I'd say 90% of the questions I see about sockets and other lower-level communications APIs online should be answered with "you're doing it wrong – use a higher level".

    4. Roland6 Silver badge

      Re: TCP is wrong for most network transactions

      >and it invariably requires programmers to implement ad-hoc framing and messaging protocols on top of it.

      That's not really a transport layer problem, remember OSI had Session and Presentation layers that added additional functionality (*). QUIC does include more Session functionality, but given how much it still pushes upwards into the application, it probably doesn't go far enough.

      (*) Although I doubt it contained all of the Session and Presentation functionality we are now requiring. So not a case for promoting OSI as defined in 1988 as the solution to the problems QUIC is attempting to address.

  9. DufDaf

    Configuration needs tweaking...

    The default settings you get when installing it do not help; you typically need to make sure you have configured 0-RTT and that your congestion window size is set correctly. It's not a lot of work, but it's essential: with that you remove almost all the overhead of the TLS/HTTPS round trips for setting up encryption, and you don't need to pause and wait for the congestion protocol in the middle of an average size document you send/receive.

    So why not just use UDP? UDP has no reliability; the QUIC protocol adds that without additional coding, just like TCP does.

    In short, you can easily win 200-600ms while having zero downside from lossy UDP.

    Source: I did it.
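
    Very roughly, the tuning described above amounts to a handful of settings like the ones below. The names are hypothetical placeholders, not any real library's API; every QUIC stack exposes its own equivalents, so check your implementation's documentation for the 0-RTT and congestion-window knobs.

    # Hypothetical illustration only: the knobs mentioned above, expressed as a
    # made-up configuration object. None of these identifiers belong to a real library.
    quic_config = {
        "enable_0rtt": True,              # resume sessions without an extra handshake round trip
        "initial_congestion_window": 32,  # in packets; defaults are often much smaller
        "congestion_control": "bbr",      # or cubic, depending on the deployment
        "idle_timeout_s": 30,
    }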

    1. Anonymous Coward
      Anonymous Coward

      Re: Configuration needs tweaking...

      So why does it come with such bad defaults?

  10. Anonymous Coward
    Anonymous Coward

    So basically they are trying to say that QUIC is SLOW.

  11. The Mole

    "the authors point out that their work shows it has "inherent advantages over TCP in terms of its latency, versatility, and application-layer simplicity".

    That's pretty much the exact list of targets of QUIC's performance advantages. In most network conditions, with similarly configured congestion control algorithms, TCP and QUIC will be capable of the same throughput - which is pretty obvious, as it is the congestion control algorithms that manage the rate at which packets flow, so the only differences there are protocol overheads.

    Latency is a really important factor in web browsing. The browser has to download a page, parse it, work out the links and then request those objects. Typically these objects are small, and it is the round trip times and handshaking that start to dominate - particularly if you are connecting to other HTTPS servers, where the negotiation phase can be expensive. QUIC is designed to remove those round trips during the handshake and start delivering data quicker.

    Other features like parallel streams and push support also help with latency reduction. Pushing means the server can deliver the main webpage and then immediately start delivering the associated assets without waiting for the client to request them. Streams mean the client can ask for a list of files and the server can send them interleaved: if file A takes 3 seconds of processing to be created, it can just get on with sending files B and C. In HTTP you can either pipeline, which just means you queue up the requests but they will still be delivered sequentially, or create multiple TCP connections, which is expensive for both the server and client and, due to TCP slow start, takes time before each connection can reach maximum throughput.
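
    The back-of-the-envelope arithmetic behind the handshake saving, assuming a fresh connection, TLS 1.3 and ignoring DNS, is just a count of round trips before the first response byte arrives (a sketch, not a measurement):

    # Round trips to first response byte (rough figures, fresh connection, TLS 1.3).
    rtt_ms = 80                  # e.g. a long intercontinental path

    tcp_tls   = 1 + 1 + 1        # TCP handshake + TLS 1.3 handshake + request/response
    quic      = 1 + 1            # combined QUIC/TLS handshake + request/response
    quic_0rtt = 1                # resumed connection: the request rides in the first flight

    for name, rtts in [("TCP+TLS1.3", tcp_tls), ("QUIC", quic), ("QUIC 0-RTT", quic_0rtt)]:
        print(f"{name}: ~{rtts * rtt_ms} ms to first byte")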

    Another beneficial feature of QUIC is the ability to cancel file transfers. In HTTP this isn't possible: if you want to abort, you have to close the connection and then re-establish a new one. TCP slow start then kicks in, where it takes time for the network stacks to calculate the optimal window size, so initially the transmission sizes are limited.

    To use it to its best, this does in particular mean you need cleverer servers that implement and exploit all the relevant features. The conclusion "QUIC does not automatically lead to improvements in network or application performance for many use cases" is not really surprising.

    The second issue with this research is that TCP has been the dominant protocol for decades, so a lot of effort has gone into optimizing it. Even commodity network cards have all sorts of optimizations in them to get the best out of TCP and offload work from the CPU (calculating checksums, packet defragmentation, etc), Linux has support for serving HTTP(S) directly from the kernel, and TLS offloading, or even serving directly from the NIC, is possible on more expensive chipsets. Historically UDP has been a second class citizen relegated to taking the slow path rather than the optimized TCP pipelines.

    Effectively the comparison is between a highly optimized internal combustion engine and an electric milk float. In heavy traffic the milk float and the petrol car are going to get the same performance (the congestion control techniques of the roads are the limiting factor). Over the last decade electric cars have got better rapidly, and whilst Formula E cars still don't quite match F1, in time they surely will. The same can be said of QUIC: as QUIC implementations and hardware are optimized, performance will increase significantly.

    Finally, QUIC has the advantage that much less of the code is in kernel space. This means servers can theoretically be optimized much more easily for their use case - using different congestion control algorithms or other logic based on whether they are regularly serving lots of small files or a few large files. Custom TCP stacks with this flexibility are a much harder proposition.

    1. Christian Berger

      Well if Latency was such a problem with web browsing, why do people...

      a) put images on different domains

      b) load Javascript from different domains

      c) not use HTTP-Request pipe-lining

      d) not use inline resources

      ?

      It seems to me that the "problem" of latency is mostly caused by web designers having no clue what they are doing.

      1. Anonymous Coward
        Anonymous Coward

        "web designers having no clue..."

        Most web designers use tools to create and manage content. They don't care how that content is delivered.

        I've been doing network programming off and on since the late '80s, including the protocol layer for a secure mobile network where everything could be moving, including the base station....

        Recently I've put a website together for a small charity. I used WiX to do so, as we will not get much traffic and time and money are limited (enough to buy a domain and host on WiX).

        If I needed to worry about traffic and performance with a decent budget thrown in I'd take a different approach. However everyone else at the charity would throw the problem to a web design company who would do nothing other than buy more server space....

      2. Christian Berger

        I have actually just learned that HTTP request pipelining is actually used... in between the inbound proxy (a.k.a. load balancer) and the web server... which means that if there is any problem in that (e.g. a POST method with a wrong Content-Length field), you will be able to break into other people's sessions.

  12. Christian Berger

    It's not meant to be faster

    After all, TCP/IP is already reasonably quick. The main incentive behind QUIC is to create complexity. For a company like Google, complexity is something very valuable, as it keeps competitors away. The basic idea of an Internet where a single person can implement all relevant protocols in their spare time is a threat to the Googles and Facebooks of the world. For them the goal is to have a closed web, with Facebook providing the authentication, Google the indexing and Cloudflare the actual content distribution. To them, you are not supposed to run your own web server.

    That's why they never address the actual problems of the Internet, like the unnecessary complexity of web standards. Instead of slimming them down to something reasonable, more and more questionable APIs get added. Browsing a website is now a security risk, as Javascript malware can exploit the unavoidable security holes inside highly complex systems like browsers.

    1. Anonymous Coward
      Anonymous Coward

      Re: It's not meant to be faster

      And what's with these "service workers" installed silently when you visit a page?
