QUIC! IETF sets November deadline for last comments on TCP-killer spawned by Google and Cloudflare

The Internet Engineering Task Force has set November 16th, 2020, as the final date for comment on Quick UDP Internet Connections, the would-be TCP-killer that Google and Cloudflare have offered up as part of HTTP/3. QUIC’s backers point out that TCP is chatty and therefore imposes long round-trip times on traffic. Which is …

  1. Pascal Monett Silver badge

    I don't get it

    "makes it possible for a client and server that have never connected to send data without any round trips between the devices "

    So the server never gets the request from the client but it knows what to send where? QUIC is using magic divination?

    There is obviously a notion that I'm missing here, but it seems to me that the protocol used does not prevent the request from needing to get to the server before the server can respond to it and that sounds like a round trip to me. I'm sure they know what they're doing, but my basic understanding of networking is insufficient for me to grasp the intended meaning of those words.

    1. NiceCuppaTea

      Re: I don't get it

      I think they are probably just dropping ACK from TCP with some sort of list of missed stuff at the end.

      TCP is typically Send Packet <-> ACK Packet

      UDP is Send Packet -> Send Packet -> Send Packet, don't care if you receive them.

      Had a quick read of the wiki and it seems QUIC processes data in the application layer, with an application ID as part of the data packet and the application informing the server of anything that didn't make it to the client.
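      (A minimal sketch of that transport-level difference over loopback, using Python's standard socket module; the addresses and payloads are illustrative, and loopback hides the loss that real networks introduce:)

```python
import socket

# UDP receiver: no connection, no handshake -- just a bound port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))   # let the OS pick a free port
rx.settimeout(2.0)          # don't hang forever if a datagram is dropped
addr = rx.getsockname()

# UDP sender: each datagram is fired off immediately. Nothing at this
# layer tells the sender whether it arrived -- loss detection, if any,
# is the application's job (which is where QUIC does it).
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    tx.sendto(f"packet {i}".encode(), addr)

received = set()
for _ in range(3):
    data, _ = rx.recvfrom(1024)
    received.add(data.decode())

tx.close()
rx.close()
print(sorted(received))
```

      Over loopback all three datagrams normally arrive, but nothing in the code guarantees it; a TCP version would need connect/accept and would get retransmission for free.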

    2. Warm Braw

      Re: I don't get it

      It's a fundamental problem of abstract layered protocols that they can be suboptimal in specific concrete cases.

      OSI networking had 5 different choices of transport layer, partly for this reason and partly to stop the French complaining, but that's a separate issue... If you imagine an old-fashioned network with "reliable" serial datalinks (i.e. each hop has its own error/loss recovery), the network mostly takes care of packet loss for you (though not entirely). In a more modern network, packets get lost all the time and the endpoints are almost entirely responsible for dealing with it - and that means end-to-end acknowledgements. But when do you send them? If you "know" the receiving end will respond to arriving data with some data of its own, you can hold off sending the ACK and roll it up with the data being sent in the opposite direction - that can be done by guesswork, but of course if you pass the responsibility up to the application layer it has a better chance of knowing the optimal timing.

      A more specific issue for QUIC is that HTTP/2.0 multiplexes individual data streams onto one TCP connection - a lost packet stalls every stream awaiting retransmission, even if data for only a single stream failed to arrive. Now of course, there are multiplexed transport protocols that don't suffer from this problem, but they all suffer from abstract vs. concrete compromise in terms of their chattiness and responsiveness.

      If you go for entirely textbook layering, there's quite a lot of state that needs to be kept too for each HTTP, TLS and transport connection, some of which disappears if you roll all the layers up together.

      The purpose of QUIC is essentially to come up with a transport+application protocol that is specific to the HTTP/TLS use case (though it has wider applications) and hence can be better optimised for that purpose. I can remember a time when the IETF would have been up in arms about layer violations, but I suspect this one is likely here to stay.
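      (The head-of-line blocking point can be shown with a toy model -- not real TCP or QUIC; the sequence numbers, stream names, and payloads below are invented:)

```python
# Two streams, A and B, multiplexed onto one connection. The packet with
# sequence number 2 (carrying stream A data) is lost in transit.
packets = [(0, "A", "a0"), (1, "B", "b0"), (3, "B", "b1"), (4, "A", "a1")]
lost_seq = 2

# TCP-like: one ordered byte stream. Nothing past the gap can be
# delivered to the application, even stream B's data.
deliverable_tcp = [p for p in packets if p[0] < lost_seq]

# QUIC-like: ordering is enforced per stream, so stream B is unaffected;
# only stream A stalls waiting for the retransmission.
deliverable_quic = [p for p in packets if p[1] != "A" or p[0] < lost_seq]

print("TCP delivers: ", deliverable_tcp)   # b1 is stuck behind A's loss
print("QUIC delivers:", deliverable_quic)  # b0 and b1 both get through
```

      The difference is exactly the stalled-stream case described above: one lost packet costs every stream under TCP, but only its own stream under QUIC.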

      1. Anonymous Coward
        Anonymous Coward

        "transport+application protocol that is specific to the HTTP/TLS"

        One good reason to avoid it, then.

    3. hammarbtyp

      Re: I don't get it

      Well, TCP has this because it is bi-directional. QUIC can be both uni- and bi-directional. In the latter case the stream is used to run a state machine. It also offers some flow control.

      Its major advantage seems to be that data can be sent over multiple streams, increasing bandwidth and reducing error control. TCP was designed to flow over a thin pipe, so it has a complex re-send mechanism. Networks have moved on a lot and this is perhaps no longer the optimum method of transferring data, especially if there is more data going in one direction than the other. It is better to use the extra bandwidth to increase the number of channels than to rely on one channel and increase latency on failure.

  2. Wibble

    Faster loading web pages!

    How about removing the megabytes of javascript driven cruft, the hundreds of (useless) images... It's like Javascript's the new Flash.

    Oh to have accessible pages again.

    BTW isn't UDP's other definition "Unreliable Datagram Protocol", i.e. fire and forget. Seems odd to add it to web pages. Makes sense for broadcast protocols though - the famous driverless cars telling all the other cars where they are.

    1. NiceCuppaTea

      Re: Faster loading web pages!

      Don't forget video and audio; UDP is perfect for those. Dropped a frame or a quarter of a word? Resend it so you get a random frame/word out of sequence, or "sod it, the user won't even notice". Not to mention the added latency and buffering required for sending ACKs for every packet, leading to weird pauses in conversations.

  3. ReadyKilowatt

    Error free wireless networks?

    The reason you ACK a TCP packet is because you have to assume the network might not be reliable. Wired Ethernet is mostly reliable. Wireless is not. Optimistic networking is probably a bad idea, and putting all of the reliability requirements on the network operators is definitely a very bad idea.

    1. Yes Me Silver badge

      Re: Error free wireless networks?

      I suggest reading the QUIC documents before shooting from the hip. None of what you say applies to QUIC.

  4. pmb00cs

    Another solution to a problem that shouldn't exist.

    As I understand it QUIC uses TLS over UDP so that the TCP overheads can be reduced to speed up the delivery of web pages. But that is only part of the story: by using UDP you can send data in any order, ignoring the ordered nature of TCP, and have the application re-request any missing data, rather than having TCP stall all data in the connection while it waits for the retransmission of a missing packet.

    Why is this an issue on the modern web? Because HTTP/2 multiplexes data streams within a single TCP connection, to speed up the sending of the loads of separate files that are "needed" to make a modern web page. Why was that needed? Because some web pages are constructed from so many different js, css, html, and other files that browsers were starting to hit limits on the maximum number of TCP connections they could have open at a time in order to show one website.

    And after all this, the fastest websites to load are still the ones that loaded fastest over HTTP/1.1: the ones that consist of an html file and a css file, maybe a small js file, and a handful of embedded images if necessary. We've managed to turn a method of sharing predominantly text into such a bloated mess that it not only needs fixing, but the fix needs fixing.
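    (The per-host connection-limit pressure can be put into back-of-envelope numbers; the asset count, the 6-connection cap, and the 80 ms per-fetch figure below are all assumed for illustration, and the model ignores bandwidth and server concurrency:)

```python
import math

assets = 90      # js/css/image files a "modern" page pulls from one host
max_conns = 6    # a common per-host TCP connection cap in browsers
fetch_ms = 80    # assumed time to fetch one small asset, per request

# HTTP/1.1: requests queue behind the six connections, so the page
# needs ceil(90 / 6) sequential rounds of fetches.
rounds = math.ceil(assets / max_conns)
http11_ms = rounds * fetch_ms

# HTTP/2: all 90 requests multiplexed onto one connection in parallel
# (idealised best case).
http2_ms = fetch_ms

print(http11_ms, http2_ms)
```

    Crude as it is, the gap is why multiplexing was bolted on -- and why the lost-packet stall it introduced then needed QUIC to fix.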

    1. Down not across

      Re: Another solution to a problem that shouldn't exist.

      Your last sentence deserves more than just the one upvote I can give.

      Sadly I don't think it's fixable. It (or the marketing deciding it's a good idea) needs to be taken behind the shed and shot. Why so much documentation and information needs to be hidden behind javascript and other crap is unbelievable.

      1. Yes Me Silver badge

        Re: Another solution to a problem that shouldn't exist.

        Again, read the actual specs, not some quickly written 2nd hand news "story". The QUIC designers are rather a long way from naive or stupid. (I am not one of them.)

  5. Clinker

    Simply Fewer RTTs for Browsing

    The problem of slow-to-load web pages became acute when everyone began using https (TLS encryption) for the majority of web pages. Negotiation of TCP, then key exchange for TLS meant 4-5 Round Trip Times (RTTs) before any web-page data was downloaded. The same transaction using QUIC requires only one RTT*.

    There are many other changes from TCP, some of which have been mentioned above. Perhaps one of the most significant is that the QUIC protocol runs in user space, unlike TCP, which belongs to the kernel. This also reduces the time it takes to put pixels on the screen by reducing memory transactions.

    QUIC improves life for Google, obviously; but this is perhaps one of the cases where what's good for Google is good for us too! ;-)

    *Zero RTT is possible with reuse of previously used TLS keys, thus removing key-negotiation; see TLS 1.3; there can be man-in-the-middle risks.
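    (A back-of-envelope version of the handshake arithmetic above, counting only the TCP/TLS negotiation and ignoring DNS and the HTTP request itself; the 50 ms round-trip time is an assumed figure:)

```python
rtt_ms = 50  # assumed round-trip time to the server

# HTTP/1.1 or HTTP/2 over TLS 1.2: TCP handshake (1 RTT) followed by
# TLS key exchange (2 RTTs) before any page data can flow.
tcp_tls12 = (1 + 2) * rtt_ms

# QUIC: the transport and TLS 1.3 handshakes are combined into a
# single round trip...
quic_first = 1 * rtt_ms

# ...and 0-RTT resumption reuses keys from a previous session, so a
# returning client can send its request immediately.
quic_resume = 0 * rtt_ms

print(tcp_tls12, quic_first, quic_resume)
```

    Even at a modest 50 ms RTT the handshake saving is 100 ms per fresh connection, which is why the change matters most on high-latency mobile links.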

  6. This post has been deleted by its author
