IETF publishes HTTP/3 RFC to take the web from TCP to UDP

The Internet Engineering Task Force on Monday published the RFC for HTTP/3, the third version of the Hypertext Transfer Protocol. As explained in an IETF summary: The QUIC transport protocol has several features that are desirable in a transport for HTTP, such as stream multiplexing, per-stream flow control, and low-latency …

  1. Warm Braw

    TCP needs a few back-and-forths

    And TLS needs some more. The main gains in QUIC come from merging and streamlining the transport and security layers and from the ability to multiplex multiple data streams within one "connection".

    The biggest benefits will come when retrieving "pages" that have lots of distinct elements coming from the same source.

    TCP implementations are usually outside user space, which gives the OS some control over the fair scheduling of resources. QUIC implementations are currently mostly part of the application (in this case the browser), and it will be interesting to see how well-behaved they are if they find wider use.

    1. David Harper 1

      Re: TCP needs a few back-and-forths

      "The biggest benefits will come when retrieving "pages" that have lots of distinct elements coming from the same source."

      Isn't that what the HTTP "Keep-Alive" persistent connection feature was designed to deliver, way back in the late 1990s?

      1. Warm Braw

        Re: TCP needs a few back-and-forths

        The difference is that a (single) persistent connection can only retrieve resources serially whereas QUIC allows many to be retrieved in parallel - in theory with less overhead than multiple TCP connections.

        1. David Harper 1

          Re: TCP needs a few back-and-forths

          I suspect the limiting factor is the bandwidth of the wifi or mobile connection, so the user experience will not be improved. After all, however many parallel streams QUIC conjures up behind the scenes, it all has to go across the same wifi or mobile connection.

          1. RichardBarrell

            Re: TCP needs a few back-and-forths

            The round trips which HTTP 3 shaves off relative to HTTP 1 and HTTP 2 are pure overhead, with both ends of the connection just sitting around and waiting. At the start of a page load, your network connection is mostly idle for several RTTs (think dozens to hundreds of ms, depending on latency) while both sides wait for lookups and handshakes to happen before they start shoving bytes down the connection in earnest.
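
            As a rough worked example (illustrative numbers, not from the article): on a 100 ms round trip, a cold HTTP/1.1 fetch over TLS 1.2 spends roughly one RTT on the TCP handshake, two on the TLS handshake and one on the request/response itself - about 400 ms before the first useful byte. QUIC folds the transport and TLS 1.3 handshakes into a single RTT (or zero on resumption), so the same fetch is nearer 200 ms, before any difference in throughput even comes into play.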

        2. sreynolds

          Re: TCP needs a few back-and-forths

          Wasn't it state of the art to have 4-8 connections in parallel? (If I remember correctly, that was the reason the JavaScript people - the ones who should have been kept in the dungeon in the basement - always asked for the 0.site.com, 1.site.com .. CNAME records, which ended up being A records because CNAMEs suck.) That was in effect almost 95% optimal, and it should be the baseline for the comparison - not some Google propaganda.

          Not having read the RFC, I guess there is a way in which the path MTU is discovered - at least the routers won't have to (nor be able to) take care of that.

          1. Peter Mount
            Boffin

            Re: TCP needs a few back-and-forths

            That was the case when delivering maps online - think OpenStreetMap etc.

            Although you might have the server maps.example.com, before http/2 it was best also to have a.maps.example.com, b.maps.example.com, c.maps.example.com & d.maps.example.com, so that you could request the map tiles across all of them & get more throughput, as browsers limited you to 4 connections per domain name.

            Since http/2, however, it's now better just to have the original maps.example.com site, as it can send just as many tiles down the single connection. Most libraries now support this, so if the connection is http/2 they ignore the alternates.

          2. sreynolds

            Re: TCP needs a few back-and-forths

            Well, the theory went that if it was a different host then more bandwidth was available for downloading vast amounts of JavaScript code too. There was some magic six-second number about web page load time that was pulled out of someone's arse.

            Anyhow, isn't it possible for a router to delay acks to modify a TCP connection's bandwidth? How is this going to work with QUIC? Once again it seems that Google only cares about Google, and everyone else who runs the infrastructure can go and get stuffed.

      2. RichardBarrell

        Re: TCP needs a few back-and-forths

        Yes. HTTP 1 keepalive mitigated some of the problem. There's a lot more to it besides. HTTP 1 has a lot of other issues with the way it uses TCP that lead to dead time where the network isn't being used well.

      3. martinusher Silver badge

        Re: TCP needs a few back-and-forths

        >Isn't that what the HTTP "Keep-Alive" persistent connection feature was designed to deliver, way back in the late 1990s?

        TCP doesn't do dropped connections that well. The 'keep alive' is a kludge designed to prevent sockets from timing out.

        1. david 12 Silver badge

          Re: TCP needs a few back-and-forths

          It's not just sockets now. IPv6 "ARP" caches time out in 15~45 seconds, and the "Keep-Alive" signal maintains that too. (IPv6 LL, not really ARP)

        2. Michael Wojcik Silver badge

          Re: TCP needs a few back-and-forths

          HTTP persistence is unrelated to TCP keepalive.

          HTTP persistence (which was non-standard for HTTP/1.0, and enabled by default for HTTP/1.1) lets the TCP conversation remain open after the server completes the response, avoiding the need to establish a new conversation. It also permits pipelining (sending multiple requests without waiting for the responses) and expectations (preliminary responses), though clients generally avoided pipelining since servers might not support it, and expectations probably caused as much trouble as they relieved.

          TCP keepalive periodically tests a connection. Per the Host Requirements RFCs, the defaults for keepalive are so large that it's irrelevant for the vast majority of HTTP use anyway. And TCP itself deals with dropped connections just fine; keepalive was really for dealing with FPL (distributed systems *must* eventually time out non-responsive nodes) and keeping transient IP transport links such as SLIP up.
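
          To make the distinction concrete, TCP keepalive is set per socket, well below anything HTTP sees. A minimal Linux sketch (the interval values are only examples, and the TCP_KEEP* options are Linux-specific):

            #include <netinet/in.h>
            #include <netinet/tcp.h>
            #include <sys/socket.h>

            /* Enable TCP keepalive on an already-connected socket fd.
               Returns 0 on success, -1 on error (check errno). */
            static int enable_keepalive(int fd)
            {
                int on = 1;
                int idle = 60;      /* seconds of idle time before the first probe */
                int interval = 10;  /* seconds between probes */
                int count = 5;      /* unanswered probes before the connection is dropped */

                if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
                    return -1;
                /* These three are Linux-specific; the portable default idle time
                   is two hours, which is why keepalive rarely matters for HTTP. */
                if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0)
                    return -1;
                if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval) < 0)
                    return -1;
                if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count) < 0)
                    return -1;
                return 0;
            }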

      4. Kevin McMurtrie Silver badge

        Re: TCP needs a few back-and-forths

        HTTP 1.1 also supports pipelining. It's REALLY fast but many implementations don't bother with it. Both the client and server must implement it for full speed.
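
        For anyone who hasn't seen it on the wire, pipelining is nothing more than writing the next request before the previous response has arrived. A rough sketch (example.com and the paths are placeholders, and a real client has to cope with servers that close early or ignore the second request):

          #include <netdb.h>
          #include <stdio.h>
          #include <string.h>
          #include <sys/socket.h>
          #include <unistd.h>

          int main(void)
          {
              /* Resolve and connect; error handling trimmed for brevity. */
              struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
              if (getaddrinfo("example.com", "80", &hints, &res) != 0)
                  return 1;
              int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
              if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
                  return 1;

              /* Two requests written back to back on one connection, without
                 waiting for the first response -- that is all pipelining is. */
              const char *reqs =
                  "GET /first HTTP/1.1\r\nHost: example.com\r\n\r\n"
                  "GET /second HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
              write(fd, reqs, strlen(reqs));

              /* Responses come back in order on the same byte stream, which is
                 exactly where head-of-line blocking bites if one of them stalls. */
              char buf[4096];
              ssize_t n;
              while ((n = read(fd, buf, sizeof buf)) > 0)
                  fwrite(buf, 1, (size_t)n, stdout);

              close(fd);
              freeaddrinfo(res);
              return 0;
          }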

        But hey, I'm sure that inventing a new layer for UDP will go flawlessly. It's weird that nobody has tried that until now.

        1. Michael Wojcik Silver badge

          Re: TCP needs a few back-and-forths

          Eventually, all distributed-application developers reinvent TCP over UDP.

          And all well-funded organizations with network-sensitive revenue streams push mechanisms for breaking network fairness through standards bodies.

    2. Phil O'Sophical Silver badge

      Re: TCP needs a few back-and-forths

      Doesn't UDP make address spoofing easier, though? Since there's no ack/connection, it's trickier to validate the source address in a packet. That would have to be done at a higher level, not that of the IP layer.

      1. Warm Braw

        Re: TCP needs a few back-and-forths

        It shouldn't in principle - it depends on the implementation of course but, for example, there's a difference between SOCK_DGRAM and SOCK_RAW.

        QUIC, however, is intended to cope with handover from, say, a mobile data connection to a WiFi connection, and uses a source identifier separate from the IP address so that the (QUIC) connection can persist even as the source IP address changes when the underlying interface changes.
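
        On the spoofing point, a small illustration of the SOCK_DGRAM/SOCK_RAW difference (Linux semantics assumed): with an ordinary UDP socket the kernel stamps the source address for you, whereas forging one would need a raw socket, which requires CAP_NET_RAW/root:

          #include <errno.h>
          #include <netinet/in.h>
          #include <stdio.h>
          #include <string.h>
          #include <sys/socket.h>

          int main(void)
          {
              /* An ordinary UDP socket: the kernel fills in the source IP and port,
                 so an unprivileged sender can't simply forge them. */
              int udp = socket(AF_INET, SOCK_DGRAM, 0);
              printf("SOCK_DGRAM: %s\n", udp >= 0 ? "ok" : strerror(errno));

              /* A raw socket, which would let you hand-craft the IP header
                 (including the source address), needs CAP_NET_RAW or root. */
              int raw = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
              printf("SOCK_RAW:   %s\n", raw >= 0 ? "ok" : strerror(errno));
              return 0;
          }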

        1. bazza Silver badge

          Re: TCP needs a few back-and-forths

          QUIC dealing with handover between connections is kinda nuts. Isn't that what IPv6 is supposed to be about, the ability to have a globally unique IP address no matter what network you're connected to?

          Most mobile networks are IPv6 already, so it's not a case of waiting...

          1. Mike007 Bronze badge

            Re: TCP needs a few back-and-forths

            IPv6 gives you a global address on say your mobile interface and another one on your WiFi interface. It doesn't give you a static address that roams between the 2.

            There is a mobility thing for roaming IPs. I have no idea if there is any software to support it...

      2. RichardBarrell

        Re: TCP needs a few back-and-forths

        No difference in this case because HTTP 3 builds a connection on top. When you're using HTTP 3 there's still a handshake implemented on top of UDP.

        What you describe is an issue with protocols like DNS that send a response out immediately in reply to a single packet.

        1. Anonymous Coward
          Anonymous Coward

          Re: TCP needs a few back-and-forths

          tcp is just fucking handshake/retry implemented on top of udp basically.

          so why the fuck re-invent the fucking wheel.

          1. RichardBarrell

            Re: TCP needs a few back-and-forths

            The old one was hexagonal. They're trying for at least 12 sides this time. Hopefully it'll be quieter and smoother.

    3. TheMeerkat

      Re: TCP needs a few back-and-forths

      Since we have HTTP/2 that solves the issue of needing multiple connections, what is the point other than Google trying to make its own Internet?

      1. Richard 12 Silver badge

        Re: TCP needs a few back-and-forths

        Head-of-line blocking, for one.

        TCP is a pipeline. Everything arrives in-order, so you have to wait for each entire item to be delivered before you can get the next one. So you can't get the second image until the first one has arrived.

        QUIC gives every item its own pipeline, so you can start downloading all the images at once and pause/continue/cancel them individually.

        This can greatly improve the user experience, even if the throughput and total load time were the same.

        - A browser can get the headers of all the images before flowing the page, thus avoiding the annoying reflows as images arrive.

        - It can pause download of items that turn out not to be needed yet (eg they're not yet visible), to ensure the ones that are needed first get the bandwidth and arrive first.

        - If the user never does the thing that would use those items, then it may never resume the download at all.

        Of course, most of this is only relevant in a browser because of scripting. JS means the browser has no idea what the DOM is actually going to look like until it's executed.

        Static pages usually have an "obvious" order.

        1. Anonymous Coward
          Anonymous Coward

          Re: TCP needs a few back-and-forths

          this is all just pissing about to save micro secs, fucking stupid.

        2. Anonymous Coward
          Anonymous Coward

          Re: TCP needs a few back-and-forths

          > thus avoiding the annoying reflows as images arrive.

          That's what the "height" and "width" attributes are for. It seems like we're just fixing sloppy HTML practices!

  2. Harry Kiri

    Optimisation...

    OK, yeah, I can see both sides of the argument here, but personally I'd rather the transport layer were not closely coupled with the application, as that's a bad idea long-term. Whenever different system elements are closely coupled to give an integrated improvement in performance, that's good for today and less so for all of the tomorrows. Through-life support and all that.

    Plus, as I've got older, the first and second rules of optimisation make more and more sense.

    1. Warm Braw

      Re: Optimisation...

      The issue is that one size may not fit all. The way TCP works may be useful for the average application, but it works against specific applications, particularly real-time media streaming.

      There are already other application-coupled transport protocols which have significant deployment (SRT and RTP spring to mind) and there have been various other protocols that had some of the features of QUIC (SCTP and SST, for example).

      It's an interesting subject for debate at what point any of these become sufficiently mature and ubiquitous that it deserves a stack of its own.

      1. Roland6 Silver badge

        Re: Optimisation...

        >It's an interesting subject for debate at what point any of these become sufficiently mature and ubiquitous that it deserves a stack of its own.

        If I understand the benefits of QUIC and what is being suggested, then we can expect the majority of client applications to have their own QUIC implementations. Looking at my current laptop and what I have running, that will mean multiple QUIC implementations concurrently in memory:

        Chrome, Edge, Firefox, Outlook, Teams, Zoom, Onedrive, Windows native...

        1. Richard 12 Silver badge

          Re: Optimisation...

          The OS may well provide a common implementation if it takes off sufficiently.

          That said, the applications that want these "special" stacks (almost all built on UDP) are mostly cross-platform, and it's difficult to justify "Use the OS one on Windows and Linux, and write our own implementation on macOS", especially when there are going to be different bugs in each.

          The problem at the moment is that Google's reference implementation uses their internal build tool that nobody else uses. (Or wants to, because the problems it solves are specific to the way Googlers work.)

          1. Roland6 Silver badge

            Re: Optimisation...

            >and it's difficult to justify "Use the OS one on Windows and Linux, and write our own implementation on macOS" especially when there are going to be different bugs in each.

            That is why a POSIX-style defined API is required; it would also reduce and simplify the interop work.

            It will also mean the QUIC layer will more likely be maintained by systems programmers rather than application programmers...

      2. Arthur the cat Silver badge

        Re: Optimisation...

        I've long thought it a shame SCTP never really took off as a universally supported standard.

        1. Chris Hills

          Re: Optimisation...

          The reason often cited for avoiding SCTP is legacy middleboxes that do not know what to do with it. If only they had concentrated on IPv6, there would be much less need for them.

      3. Anonymous Coward
        Anonymous Coward

        Re: Optimisation...

        Go back far enough and no one used TCP for streaming; there are protocols like RTP for that, based on UDP. Anyone remember RealPlayer? Flash?

        It was Google that decided to use http for streaming video in YouTube, to make a Web browser version that didn't need Flash player...

        The wheel is being reinvented, but with some security improvements. And that's about all it is, really.

        There's nothing particularly wrong with this. Most OSes do TCP at the application library level, and this is just more network shenanigans being implemented within applications. It's only really Linux where this does not make sense, because that does things like TCP in the kernel. So there will be a split in where the network data consumption points are going to be, mixed between the system interface and somewhere in the application libraries.

    2. Roland6 Silver badge

      Re: Optimisation...

      There is a lot to be said for getting the transport layer out of the application and user space and back into the common/shared services OS space. It would help both with application bloat (remember Chrome already includes its own secure DNS service) and with fault finding.

      However, for this to happen requires another POSIX-style intervention and the definition of a QUIC API. This is not really within the scope of the IETF RFC process. Back in the 1980s, X/Open stepped into the breach over Unix API standardisation; somehow I don't see The Open Group (which X/Open morphed into) doing similar now.

      1. Anonymous Coward
        Anonymous Coward

        Re: Optimisation...

        Maybe, maybe not - I've implemented other protocols in kernel land and used standard POSIX functions (albeit needing to add to the headers for the new protocol types when opening up the socket).
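
        For example, where the kernel already ships another transport (SCTP via lksctp on Linux, say), the standard socket() call is the only "new API" an application needs - a minimal sketch:

          #include <errno.h>
          #include <netinet/in.h>
          #include <stdio.h>
          #include <string.h>
          #include <sys/socket.h>

          int main(void)
          {
              /* Same POSIX call, different third argument: if the kernel has an
                 SCTP implementation, no new userland API is needed at all. */
              int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
              if (fd < 0)
                  printf("SCTP not available here: %s\n", strerror(errno));
              else
                  printf("Got a one-to-one style SCTP socket, fd %d\n", fd);
              return 0;
          }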

    3. martinusher Silver badge

      Re: Optimisation...

      The main issue is that HTTP frames (badly) a datagram-oriented protocol on top of what is an inherently inefficient stream protocol. To an applications programmer this means little -- they just open a socket for a 'reliable' connection and it behaves like a full-duplex serial port. The implementation's a nightmare; it's actually amazing that it works as well as it does. Switching to datagram orientation greatly simplifies transactions and makes everything vastly more efficient.

      1. James Anderson

        Re: Optimisation...

        So now the application has to detect dropped packets, request resends and correctly reorder out of order packets.

        Big gains especially over a dodgy mobile phone connection.

        1. Richard 12 Silver badge

          Re: Optimisation...

          There are a lot of purposes where you don't care about dropped packets, because the data would be outdated by the time a replacement arrived anyway.

          Browsers do a heck of a lot of audio and video streaming.

    4. Version 1.0 Silver badge
      Facepalm

      Re: Optimisation...

      Looks like this will optimize malware deliveries as well. All these companies suggesting this will be an "improvement" are probably just seeing it as a way to make more money from users, not actually to assist them. It will be much easier to deliver bigger adverts and collect user data as users do more "faster".

  3. Spamfast
    WTF?

    Slow, verbose, flaky, TCP.

    What is the contributor taking? Simple, robust, reliable, extensible. TCP has been remarkably successful, even when used for things it probably shouldn't such as real-time video.

    It doesn't fare well over a few link protocols (notably itself) but it's coped amazingly well with all new point to point & packet switching tech including hostile ones such as ATM over asymmetric DSL and V.90.

    1. anothercynic Silver badge

      I was going to comment on the very same. If anything, TCP is somewhat less 'flaky' than UDP in that it actually tells the sender that a packet has been received, whereas UDP is a send-and-forget protocol.

      But hey, what do I know... I just work with TCP and UDP all bloody day long.

      1. Jellied Eel Silver badge

        ...it actually tells the sender that a packet has been received

        This. So shifting to UDP transitions to spray & pray. Then add multiplexing to try and pack more UDP into a single connection.

        So when there's congestion, and packets are discarded, multiple sessions will fail. That could make events like Ticketmaster flogging tickets for some band more fun, given they attract high surge volumes. Then it'll be down to the app to retry the session, which will add to the congestion.

        I also suspect for the average user, the latency benefits are minimal given it's the human's tolerance for page load times that's the main factor. Web browsing isn't really that much of a low-latency requirement. Where it'll probably have more impact is with typical web pages 'requiring' 20-30+ sessions to multiple destinations where ads, trackers, cookies and general data rape occurs in the background.

        Promoters of QUIC will no doubt be fine with this, ie Google, Cloudflare, MS etc could proxy those sessions.

        Also curious how this might affect 'net neutrality discussions. This will increase congestion and degrade performance due to packet loss, so there's a certain logic in potentially being able to prioritise traffic. But somehow, I can't see Google prioritising ad/analytics traffic over user-requested content, because that would violate neutrality. Even if Google's probably doing that already.

        1. This post has been deleted by its author

        2. SImon Hobson Bronze badge

          Where it'll probably have more impact is with typical web pages 'requiring' 20-30+ sessions to multiple destinations where ads, trackers, cookies and general data rape occurs in the background.

          Yeah, there's a real correlation between pages that are dog slow to load, and pages that have more ads than content.

          The real answer would be for web site designers to actually design sites to load quickly, rather than to stuff as many bits of cut-n-paste tracking & data stealing code into it as they can. One can but dream ...

          But one thing we can be sure of, as soon as anyone comes up with a faster pipe, there will be web sites quick out of the blocks to use up that extra speed to pump more sh*t at us that we don't want.

        3. Robert Carnegie Silver badge

          But comms may have to go most of the way around the world, and back. And practically, at somewhat less than the speed of light.

      2. The Mole

        But TCP acknowledgement is implemented as a single stream with head-of-line blocking. One lost packet effectively pauses everything until the retransmission happens. (Well, OK, it's a bit more complicated than that, but the simplification is close enough to reality.)

        QUIC builds acknowledgement on top of UDP (in the same way TCP builds it on top of IP). This means it has greater flexibility to evolve more complex acknowledgement protocols - such as allowing traffic for other sub-streams to continue and only holding up the sub-stream with the lost packet, or deciding it's a real-time video stream and it's better just to continue and let the error handling in the video decoder deal with some missing data.

        The designers of QUIC basically had 3 choices:

        1. Build it on top of TCP, just like HTTP and HTTP/2. This would have meant all the problems and limitations of TCP, especially those related to flow control.

        2. Create a new protocol on top of IP alongside TCP/IP and UDP/IP (QUIC/IP). Architecturally this would have been the cleanest approach, but it would require all networking equipment and stacks to be updated to support it - we have seen how that has worked out for IPv6.

        3. Layer it on top of UDP so that it can be used on the existing internet infrastructure, but create a new connection-oriented protocol - QUIC/UDP/IP.

        Option 3 was definitely the wisest decision, but it does cause confusion, as people assume that means it 'is' UDP with its limitations, rather than the reality that it builds something new on top of UDP for convenience.

        1. anothercynic Silver badge

          I am not arguing the choice of carrier protocol for QUIC.

          I am, however, arguing against the lazy word choice describing TCP by a tech journalist who should know better, because TCP is not what it is described as. That it has limitations is one thing; describing it as crap (I paraphrase very broadly here) because of certain design requirements (which led to the limitations that now constrain the modern Internet in ways the inventors of TCP didn't predict 40+ years ago) is quite another.

    2. Crypto Monad Silver badge

      It's a shame that SCTP didn't get wider support. It does all the multiplexing stuff, and remains part of the OS so it protects the network against bad protocol implementations.

      1. Richard 12 Silver badge

        No OS support

        I've wanted to use it many times. But I can't, because it's not available on some of the OSes I support, and I can't do it in userland.

    3. RichardBarrell

      TCP was a very good design. No disrespect.

      In a modern world, it's unfortunate that TLS is layered on top of TCP rather than integrated, so you get 2 handshakes before anything starts really happening. If you were designing TCP again from scratch in this century, you'd consider merging the handshaking parts of TLS into it to save the RTTs. Or changing the interface a bunch so that the TLS handshaking could piggyback on the same packets as the TCP handshaking.

      Also TCP has always had problems over flaky connections with head-of-line blocking.

      1. Jellied Eel Silver badge

        Also TCP has always had problems over flaky connections with head-of-line blocking.

        But that may be preferable to head-of-line dropping, which will happen with UDP. Then the apps need to figure out where their packets have gone.

        1. The Mole

          See comment above. UDP doesn't do head of line dropping, it does packet dropping. The protocol designer on top of UDP is free to implement their own flow control and retry mechanisms just as TCP does over IP.

          The benefit is that sometimes head-of-line blocking is what you want, and other times skipping lost packets is what you want. QUIC can allow the client both modes of operation, unlike TCP, which mandates the behaviour whether you like it or not.
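
          As a toy illustration of what "implement your own retry on top of UDP" means in practice - a stop-and-wait sketch, nowhere near QUIC's real loss recovery, with a one-second timeout picked arbitrarily:

            #include <netinet/in.h>
            #include <sys/socket.h>
            #include <sys/time.h>

            /* Send 'len' bytes to 'dst' and wait for any reply as an "ack",
               retrying up to 'tries' times. Returns 0 on ack, -1 on giving up. */
            static int send_with_retry(int fd, const struct sockaddr_in *dst,
                                       const void *buf, size_t len, int tries)
            {
                struct timeval tv = { .tv_sec = 1 };   /* 1s ack timeout */
                setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

                char ack[16];
                for (int i = 0; i < tries; i++) {
                    sendto(fd, buf, len, 0,
                           (const struct sockaddr *)dst, sizeof *dst);
                    /* recvfrom times out if no ack arrives; on timeout we simply
                       retransmit -- that's the whole "retry mechanism". */
                    if (recvfrom(fd, ack, sizeof ack, 0, NULL, NULL) >= 0)
                        return 0;
                }
                return -1;
            }

          Everything TCP normally gives you for free - ordering, windows, congestion control - still has to be layered on top of something like this, which is essentially what QUIC does properly.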

          1. Jellied Eel Silver badge

            See comment above

            Ok, so..

            But TCP acknowledgement is implemented as a single stream with head-of-line blocking. One lost packet effectively pauses everything until the retransmission happens. (Well, OK, it's a bit more complicated than that, but the simplification is close enough to reality.)

            Kind of, although as you say, reality gets a bit more complicated. So figure on this example-

            https://en.wikipedia.org/wiki/Head-of-line_blocking#/media/File:HOL_blocking.png

            So yes, there's a risk of head-of-line issues if the receiver is busy. But TCP provides more feedback to the apps about the connection state, should the developer choose to use that data. With TCP, a lost packet won't pause 'everything'; it should only pause that specific TCP connection. Which admittedly gets FUN if the app is attempting to use parallel TCP connections for a single transfer.

            The protocol designer on top of UDP is free to implement their own flow control and retry mechanisms just as TCP does over IP.

            It's not really a case of 'free to implement', more essential to implement, especially if your app isn't loss-tolerant. But consider the wiki pic again, just with the 'switching fabric' being replaced with the Internet. Session 4 is people trying to get tickets from Ticketmaster, which is congested, so random UDP packets will be merrily filling the bit bucket.

            So there'd be multiple potential blocks. The host, while the app tries to figure out what packets were lost and thus which packets to re-transmit, or re-request. Then peering locations, where there's frequently congestion and packet loss, but the peering routers have no knowledge of the application, or request state, and finally the server at the far end that may be congested, drop packets, but has a better chance of being 'app aware'.

            Then add in multiplexing to try and cram more data into a single 'connection', and dropped packets will result in apps having to figure out how that impacts the muxed transport, and trying to re-request the lost data. So computationally far more expensive, and potentially leading to higher latency while the app tries to figure out what the hell is going on.

            Meanwhile, buffers are still filling up, LIFO, FIFO or WRED is merrily dropping more packets, and goodput falls through the floor. Especially if retransmission is occurring end-to-end, ie both host and server are in the middle of this mess. Basically it'll create huge spikes in retransmission/recovery any time there's congestion.

            It's pretty much why real-time stuff like voice and video tends to run with a TCP control session so the apps have at least some chance of monitoring and managing link state. It's also probably not something the 'network' can fix. So an app may be able to prioritise sessions it thinks are important, but it won't be able to signal that to the network, ie routers. That implies prioritisation at the network level, which according to 'Net Neutrality fans is a very bad thing.. Even though prioritising real-time transmissions is arguably a good thing.

            So it's a little strange. We know network connections are frequently congested, packet loss is common, so any new protocol that promises to improve performance by creating more congestion problems seems a bit pointless.

            1. doublelayer Silver badge

              "So an app may be able to prioritise sessions it thinks are important, but it won't be able to signal that to the network, ie routers. That implies prioritisation at the network level, which according to 'Net Neutrality fans is a very bad thing.. Even though prioritising real-time transmissions is arguably a good thing."

              As one of those advocates, I don't think that prioritization of any kind is always bad. I think that allowing an ISP to prioritize as it wishes is a bad thing, because I know how ISPs like to give users substandard service and gouge them to get things back. If you have to prioritize traffic very often, it means you don't have enough resource to handle all the traffic that's going through your system. For your personal or business systems, this is a thing you can deal with by provisioning more resource or moving stuff around to deal with the limited availability, because the thing that suffers from deprioritization is also yours. You have to deal with the tradeoffs and can decide whether it is bad enough to invest in more capacity. An ISP deprioritizing a user isn't the same, because it is the user that suffers and the ISP would be happy to make them pay more to get their service back (in return for picking a new person to suffer), and thus they would have an incentive never to fix those problems.

              As for apps prioritizing their own data flows, you can do that without network knowledge. There are various ways to make connections run slower than they otherwise would, and a performance-sensitive program can do that to nonessential connections. Servers doing that to clients is also possible though done less often. I also wouldn't object to a protocol where network devices can be told to deprioritize something by the endpoints alone, though I question how useful that would be.

              1. Anonymous Coward
                Anonymous Coward

                In very late versions of MS-DOS, Microsoft did this. It's been 30 years, but I remember being able to break my co-worker's application just by running a file copy from pc1 to pc2. We discovered that all of the packets that made up the file copy had the bits set and all of the important application traffic... didn't. Running a non-Microsoft application at the same bitrate wouldn't cause the bits to be set, and their application worked just fine...

            2. Richard 12 Silver badge

              And the point of QUIC is that you, an application developer, can get the benefit of multiple, independent streams over multiple paths and multiple NICs, some reliable like TCP, some unreliable like UDP, without having to deal with the horrible mess of synchronisation.

              You open one connection, and the protocol stack handles all that mess.

              Yes, you can do the same thing by opening multiple TCP and UDP sockets and rolling your own monitoring, sharing and migration between NICs.

              I've done it, poorly - migration between NICs is hard. QUIC does it far better than I have the time or indeed knowledge to do it.

        2. heyrick Silver badge

          "Then the apps need to figure out where their packets have gone."

          This.

          The joy of TCP is that I just open a connection and throw some data at it, then await a response. Granted, it isn't anything special like streaming video, but all the same the data goes in and different data comes out and all the magic that makes it work is absolutely not my problem.

          Making things like flow control and resends my problem sounds like running in reverse.

          1. Richard 12 Silver badge

            Straw man

            As an application developer, you don't handle any of the reliability stuff. It is NOT a raw UDP connection.

            You just open a connection, then open substreams and set the flags for each as to whether you want each substream to be ordered and/or reliable.

            The fact that at the moment it happens to be implemented as a userland library is irrelevant.

            Back when TCP was developed, it was entirely in the application too. Heck, IP was handled inside the application at the beginning.

            The OS eventually took that over, because lots of applications wanted TCP/IP.

      2. jay8000

        > on top of TCP rather than integrated

        It's called TCP Fast Open, and it was solved more than 10 years ago.
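
        The Linux client side looks roughly like this (a sketch; the server has to enable TFO as well, and the net.ipv4.tcp_fastopen sysctl has to allow it):

          #include <netinet/in.h>
          #include <sys/socket.h>
          #include <unistd.h>

          /* Client side of TCP Fast Open on Linux: the first data segment rides
             on the SYN, saving a round trip on repeat connections to 'dst'. */
          static int tfo_send(const struct sockaddr_in *dst,
                              const void *req, size_t len)
          {
              int fd = socket(AF_INET, SOCK_STREAM, 0);
              if (fd < 0)
                  return -1;
              /* No connect() call: MSG_FASTOPEN performs the connect and carries
                 the payload in the SYN (falling back to a normal handshake if the
                 server has no cookie for us yet). */
              if (sendto(fd, req, len, MSG_FASTOPEN,
                         (const struct sockaddr *)dst, sizeof *dst) < 0) {
                  close(fd);
                  return -1;
              }
              return fd;
          }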

    4. Charlie Clark Silver badge

      To be fair, I think the statement is supposed to be ironic. TCP is robust because it's verbose, but UDP has for years been preferred for streaming.

    5. fibrefool
      Boffin

      real-time video over TCP?

      good luck with that. If I'm watching the footie on cable, and my mate's streaming it, I have to remember not to text him the moment a goal is scored.

      and as for videoconferencing...

      and re TCP over ATM that's not really 'hostile' in the ADSL case (IIRC you generally have a UBR ATM VP shared between multiple subscribers and then the BRAS will shape traffic into that). If you're talking ATM ABR circa 1996 (e.g. StrataCom "Foresight" trying to do closed-loop congestion control) then that's a different story...

  4. Crypto Monad Silver badge

    RFC

    RFC stands for "Request for Comments" – meaning HTTP/3 awaits final signoff

    Not really. Once it's been published as a standards-track RFC ("proposed standard"), in effect it's already signed off. Prior to this it would have been a series of "Internet Drafts".

    For comparison: RFC2822, which is still the primary standard for E-mail from 2001 (and replaced RFC822 from 1982), remains a "proposed standard".

    1. Arthur the cat Silver badge

      Re: RFC

      Quite correct. They were originally called "Request for Comments" because many of them were written by postgrads or other non-management types and they thought they couldn't get away with calling them standards because of managerial backlash.

    2. diodesign (Written by Reg staff) Silver badge

      RFCs

      Yeah, got it - thanks. We've corrected the piece. Thanks for the feedback.

      Next time you spot something wrong, please drop us a note too at corrections@theregister.com so we can get on to it right away.

      C.

  5. Anonymous Coward
    Anonymous Coward

    Truly, madly, deeply …

    … suspicious of anything emerging from Mountain View (even if it's an airship).

  6. Jan K.

    "TCP ... therefore produces long round-trip times which can translate into a poor user experience."

    Am I too cynical or ignorant thinking that every time google & the gang speaks of "good user experience", it's not really about the user?

    Okay, I'll admit being ignorant when it comes to network protocols. But cynical too??

    1. Michael Wojcik Silver badge

      It's a poor user experience under these assumptions:

      1. Your stuff is horrible RIAs / SPAs that continually chat with the server.

      2. Your stuff is crammed full of pointless crap.

      3. Your stuff is more important than anyone else's stuff.

      So from Google's point of view, yeah, it's all about the user experience.

      Essentially, when Microsoft invented XHR, and then Google adopted it and convinced everyone to use it (with the help of web designers), that pretty much killed any hope of reasonable use of HTTP. Thus we lost a fairly decent protocol for document download with a modicum of interactivity (HTTP/1.1) in favor of ever-more-complicated and arcane solutions to the problems created by tech giants.

  7. rjed
    Go

    QUIC can do what TCP cant

    Usually I hear folks debating the performance (throughput, latency) aspects of things when QUIC is introduced. I don't think QUIC can improve throughput, since that is primarily handled by congestion control and flow control algorithms which are mostly the same on both TCP and QUIC. QUIC has some advantages when it comes to latency improvement just because it has deeper integration with TLS 1.3 and supports features such as session resumption and 0-RTT handshake.

    But I would say those would not be my primary reasons to move to QUIC. QUIC can do some things which TCP can never do (and TCP cannot do them because of its ossification in existing systems). Things such as:

    1. handling partial reliability: TCP is a fully reliable transport protocol. Lots of times we need partial reliability for scenarios such as gaming and live streaming. For example, within a video stream you might want full/better reliability for I-frames but lower reliability for P/B frames. In fact, P/B frames towards the end of a GOP (group of pictures) could have much lower reliability. Today, if an app does a tcp-send, it cannot then drop the data: TCP will try to resend it until it manages to get it delivered. This is counterproductive in the scenarios I mentioned. In a live stream, if you cannot deliver the P/B frame within a second (for instance), then it is best to drop it, since the video decoder will extrapolate and manage to recover anyway. Retrying even after a few seconds will result in traffic clogging, impacting subsequent traffic. QUIC can support such modes.

    2. improved multipath transports: MPTCP (multipath TCP) suffers from a lot of design constraints because of TCP ossification. QUIC can do much better with multipath. As an example, once a segment is transmitted on a TCP path within a connection, that segment cannot be rescheduled to be transmitted on another path within the same TCP connection, since middleboxes expect all the TCP segments to arrive (because of full reliability). QUIC doesn't suffer from such a limitation.

    3. notion of streams: TCP does not support the notion of streams, and thus an application has to initiate multiple TCP connections, one for each stream. QUIC's design is much closer to the app.

    4. future extension: QUIC can be extended without having to worry about ossification. QUIC is smartly designed so that intermediate routers/switches cannot read the packet and cannot make a decision based on a specific bit within the packet. This means we will see innovation at the transport layer. Today, innovation in TCP is stalled because TCP has to work within the constraints of middleboxes which have ossified implementations. QUIC has ensured with its design that this ossification won't happen.

    Using QUIC in kernel space or user space is a systems issue. Today one can use TCP in userspace as well, but most apps prefer to use the existing kernel-space implementation. The same will be true for QUIC, since app devs will want to use an existing implementation rather than deploying their own.

    1. Roland6 Silver badge

      Re: QUIC can do what TCP cant

      But QUIC isn't a transport protocol as it uses TCP and UDP.

      QUIC is a session management and transport orchestration protocol.

      1. Richard 12 Silver badge
        Headmaster

        Re: QUIC can do what TCP cant

        TCP and UDP aren't transport protocols because they use IP.

        IP isn't a transport protocol because it uses ethernet frames (or the wifi etc equivalent)

        It's turtles all the way down.

        Definitions like that aren't useful. It looks like transport and smells like transport.

        1. Roland6 Silver badge

          Re: QUIC can do what TCP cant

          >It looks like transport and smells like transport.

          TCP and UDP definitely look and smell like Transport layer protocols, QUIC has the distinct smell of a Session layer protocol, even though it implements features (such as the orchestration of multiple transport sessions/streams) I don't remember reading about in the OSI session layer specifications.

    2. RichardBarrell

      Re: QUIC can do what TCP cant

      > 4. future extension: QUIC can be extended without...

      Something I'm cautiously optimistic about is that at the moment some ISPs mess around with UDP traffic, and I'm hoping that widespread deployment of QUIC / HTTP 3 will punish that severely, forcing them to stop doing that. I think this would have the knock-on effect of making the internet more friendly to future innovations by other parties.

    3. Anonymous Coward
      Anonymous Coward

      Re: QUIC can do what TCP cant

      QUIC can be extended without having to worry about ossification.

      Err, once it's a standard, and implemented by every man and his dog (or at least, servers and clients), then it will be ossified. Perhaps less so than TCP, but there is no doubt that it will be hard and slow to change - update your servers and they can't talk to existing clients, so they'll have to support the previous version(s), and so most of the time you'll be wasting effort by supporting the new version (duplication of code etc to handle two different cases).

      QUIC is smartly designed so that intermediate routers/switches cannot read the packet and cannot make a decision based on a specific bit within the packet.

      And that's going to be a problem too - since in many situations there is a requirement to be able to do exactly that. Sometimes it's policy, sometimes it's legal - but for many people it is not optional. At ${day_job}, ALL of our internet traffic goes via a gateway guardian - and if it didn't then we'd not be allowed internet access at all (or possibly forced to use two different segregated networks; one secured and with zero internet; the other not usable for most work but with internet access).

      So you can be certain that the people who make this sort of kit will already have QUIC support, and content mangling will be possible.

      Of course, there is the easier option - just drop all QUIC traffic. For a long time, servers and clients will need to be able to fall back, so users won't notice its absence and you can carry on applying TCP controls.

      1. The Mole

        Re: QUIC can do what TCP cant

        Not quite. The issue that QUIC tries to resolve is where client A and server B both support feature X of TCP, but because box X in the middle does some 'manipulations' they can't actually use it - the box in the middle breaks things even though the negotiation to activate the feature succeeded.

    4. Roland6 Silver badge

      Re: QUIC can do what TCP cant

      >QUIC is smartly designed so that intermediate routers/switches cannot read the packet and cannot make a decision based on a specific bit within the packet

      A question has to be asked as to whether QUIC hinders content filtering such as adblockers.

      Also whether it can be used by users to give a better response and user experience, e.g. force a download priority on content: text first, article images second, third-party stuff such as ads last or never. Given Google sees adblockers as a "revenue sinkhole", I suspect QUIC makes content filtering more difficult.

  8. Spamfast
    Facepalm

    Think of the little ones.

    I often work with severely constrained platforms - mostly 32-bit Arm these days but nonetheless with available RAM in the hundreds of kilobytes not megabytes and hundred megahertz not gigahertz CPUs. Stacks like lwIP/mbedTLS can handle this with HTTP on top. I'm not sure I'd like to try adding QUIC. So now the server end will have to implement two parallel interfaces for the first and second class clients. I can understand Apache's hesitation.

    1. Richard 12 Silver badge

      Re: Think of the little ones.

      Servers will need both during any transition period for any protocol.

      As that transition period is "until there are no more old clients", it's basically forever.

      Same is true of IP, of course.

  9. jay8000

    tcp fastopen

    TCP Fast Open with SSL works fine here. It also supports the same 0-RTT handshake as HTTP/3.

  10. Zippy´s Sausage Factory
    Devil

    "Microsoft also liked QUIC so much it created its own version..."

    Ah yes - embrace, extend, extinguish...

    "... and open-sourced it."

    Wait... that's not the Microsoft I remember...

  11. Anonymous Coward
    Anonymous Coward

    I'm using QUIC now to access the register

    I wondered how 'The Register' is so snappy on this computer. Seems like I'm using QUIC already, according to tcpdump.

    It's like local-lan fast.

  12. simpfeld

    The end of the IP protocol number field?

    I see why they aren't doing QUIC/IP as current routers wouldn't implement it, but this effectively renders the IP protocol number useless.

    A new protocol layered over UDP (just as we have layered a load of crap on port 443) is just more layering inefficiency.

    Could this not have been a full IP protocol, with fallback to UDP where that wasn't available?

    1. Richard 12 Silver badge

      Re: The end of the IP protocol number field?

      Yes, as I understand it this was considered, but nixed for two additional reasons beyond router support.

      The security models mean a userland application cannot send or receive raw frames on any common desktop or mobile OS.

      This would greatly limit uptake, as on Linux or Windows kernel-mode stuff can only be installed by root/admin. It's even harder on macOS, iOS and Android.

      The other thing is that a lot of NICs (especially on servers) do a lot of the UDP (and TCP) stack in hardware, and are heavily optimised for that. Skipping it would mean doing that work on the CPU. Less of an issue now that "Smart NICs" are a thing, but back then?

      1. Anonymous Coward
        Anonymous Coward

        Re: The end of the IP protocol number field?

        It's a shame. We aren't able to extend IP the way it was designed, due to close-minded implementers.

        Mind you, ipv6/ip (ip proto 41) is a thing that works at least.

    2. Roland6 Silver badge

      Re: The end of the IP protocol number field?

      The IP protocol number field is doing just fine - currently routers are really only required to implement IPv4 and IPv6; the protocol number field just permits traffic to be handed to the appropriate IP handler.

      It makes sense for QUIC to be a TCP/UDP overlay, as this permits a client to easily use some of the stream orchestration features against systems that don't have native QUIC support. It also means the overlaying application only needs to hand stuff over to QUIC and not be concerned about streaming details. I.e. if QUIC were to be a full transport layer replacing TCP/UDP then the application would strictly need to be able to handle both the use and non-use of QUIC.
