If you need a TCP replacement, you won't find a QUIC one

Some might say there's a possibility QUIC will start to replace TCP. This week I want to argue that QUIC is actually solving a different problem than that solved by TCP, and so should be viewed as something other than a TCP replacement. It may well be that for some (or even most) applications QUIC becomes the default transport …

  1. Richard 12 Silver badge

    Multiple semi-dependent streams

    Is the "killer feature" I see in QUIC.

    There's an awful lot of applications where you need to send & receive a lot of independent streams of data, but also need to know whether the aggregate of the streams is working.

    Doing that over TCP/UDP means either:

    A) Opening multiple sockets and having some supervisory process keeping track of whether any of them have stalled for "too long" and taking action on the other streams if they have. That's a lot of TLS sessions to set up and keep track of.

    B) Opening a single TCP stream and living with head-of-line blocking.

    QUIC gives you that aggregation without head-of-line blocking, all under a single TLS session.
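    As a rough illustration, here is what that looks like with the quic-go library. A sketch only: the host, port, and ALPN string are made up, and quic-go's API has shifted between versions.

        // Three independent streams over one QUIC connection: one
        // handshake, no head-of-line blocking between the streams.
        // Host, port, and ALPN value below are placeholders.
        package main

        import (
            "context"
            "crypto/tls"
            "fmt"

            "github.com/quic-go/quic-go"
        )

        func main() {
            ctx := context.Background()
            // One connection, one TLS handshake for all streams.
            conn, err := quic.DialAddr(ctx, "example.com:4433",
                &tls.Config{NextProtos: []string{"my-proto"}}, nil)
            if err != nil {
                panic(err)
            }
            for i := 0; i < 3; i++ {
                // Each stream is ordered and reliable on its own; a lost
                // packet on one stream does not stall the other two.
                s, err := conn.OpenStreamSync(ctx)
                if err != nil {
                    panic(err)
                }
                fmt.Fprintf(s, "stream %d payload\n", i)
                s.Close()
            }
        }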

  2. AlanSh

    Not sure about this

    My thoughts would be that RPC is one abstraction "above" TCP and will use TCP as its reliable transport. I don't see it as a third transport method in itself.

    Yes, QUIC can do it all, but you are not starting from the right premise (however, I could be wrong).

    Alan

    1. Adrian 4

      Re: Not sure about this

      I think you should re-read the article.

      Yes, current RPCs often use TCP, but it's not a great fit.

      However, I suspect that if HTTP changes to use a protocol that does out-of-order requests and responses, a fair few applications written to use HTTP will break. Asynchronous stuff often seems to need a mindset that doesn't come naturally to application programmers taught to program on local interactive in-order applications where response time is short and consistent.

      1. CrackedNoggin Bronze badge

        Re: Not sure about this

        There is a lot of extra work to write true "asynchronous/parallel" code, but "concurrent" code (i.e., using promises in a single thread) is in some cases(*) perfectly adequate, and far easier to manage. It's also possible to extend "concurrent" to include "parallel" when required, to get the best of both worlds.

        (*When throughput is not limited by computational power.)

        That's part of the reason for the popularity of JavaScript/TypeScript, especially since promises were introduced.

        The mental model closely follows Gantt charts and event chains - it's not that hard - but it was much harder before promises.
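        The same split shows up outside JavaScript, for what it's worth. A rough Go analogue of the idea (goroutines standing in for promises; pinning the runtime to one OS thread makes it concurrent but not parallel):

            // Concurrency without parallelism: with GOMAXPROCS(1) the
            // goroutines interleave on a single OS thread, much like
            // promises on a JavaScript event loop. Raising GOMAXPROCS
            // (the default uses every CPU) extends the same code to
            // true parallelism when the work is compute-bound.
            package main

            import (
                "fmt"
                "runtime"
                "sync"
                "time"
            )

            func main() {
                runtime.GOMAXPROCS(1) // one thread: concurrent, not parallel
                var wg sync.WaitGroup
                for i := 0; i < 5; i++ {
                    wg.Add(1)
                    go func(id int) {
                        defer wg.Done()
                        time.Sleep(100 * time.Millisecond) // stand-in for network I/O
                        fmt.Println("response", id)
                    }(i)
                }
                wg.Wait() // roughly Promise.all
            }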

      2. david 12 Silver badge

        Re: Not sure about this

        I think that web apps have pretty much abandoned in-order comms already. There was a significant amount of stuff that broke when JavaScript engines became out-of-order processors, but that was 5-10 years ago.

        There are still things that use "HTTP", but it's actually become rare to find anything significant that doesn't require JavaScript, and modern JavaScript implies out-of-order processing.

      3. the spectacularly refined chap

        Re: Not sure about this

        However, I suspect that if HTTP changes to use a protocol that does out-of-order requests and responses, a fair few applications written to use HTTP will break.

        The author anticipates this as a key requirement for any new transport layer. From the article:

        RPC needs to handle lost, misordered, and duplicated messages ... and fragmentation/reassembly of messages must be supported

        I have to agree with AlanSh here: it doesn't seem appropriate to break layers of abstraction for a single use case. There is an argument that a third transport layer is needed - a reliable message-passing service. That doesn't need to be intrinsically linked to an RPC mechanism.

        Yes, you need error checking, and you need packet fragmentation and reassembly. You also want a fire-and-forget transmission model - "send this few hundred kilobytes to that machine and don't trouble me further unless there is a problem". Whether you layer that on top of or alongside UDP is a judgement call.
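        To make that concrete, such a reliable message-passing service might expose something like the following hypothetical Go interface. Every name here is invented for illustration; no such standard API exists.

            // Hypothetical shape of a "third transport": whole messages,
            // delivered reliably, with failures reported out-of-band in
            // the fire-and-forget style described above.
            package msgtransport

            import "context"

            type Conn interface {
                // Send queues msg for reliable delivery and returns at
                // once; the transport handles fragmentation, reassembly,
                // and retransmission behind the scenes.
                Send(msg []byte) error

                // Recv blocks until one complete message has arrived.
                Recv(ctx context.Context) ([]byte, error)

                // Failed reports messages the transport has given up on -
                // "don't trouble me further unless there is a problem".
                Failed() <-chan error

                Close() error
            }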

        If you go for a new protocol at the transport layer, that introduces issues of its own. The first that comes to mind would be just how big these messages can get. Potentially you could end up with tens of megabytes of data inside operating system or firewall buffers - an issue that doesn't arise in the byte-stream model, where to a first approximation data is consumed as it is received.

        If there's movement towards that, I certainly wouldn't trust Google to design it. History has shown they tend to have tunnel vision towards their own use cases. Any solution needs to scale not only to servers and user devices but also to small IoT devices with maybe 100 KB of RAM.

        1. Anonymous Coward

          Re: Not sure about this

          "If you go for new protocol at the transport layer that introduces issues issues of its own."

          Pssst... my memory's not what it used to be, but my recollection is that the stuff you mention was all doable (and documented and available and testable) in the 1980s in a standards-based and vendor-independent way, but the processing power (silicon *and* mental) was more than people were generally prepared to pay for. Now the silicon is cheap. But the necessary software engineering principles are nowhere to be seen.

          Back then, part of it came from DECnet, and part of it came from the OSI higher layers. Maybe some other stuff too - after all, the real point of this stuff is what the data *means*, not how the bits behave as they get from one box to another or one app to another (and in a few years' time we'll likely be back in the dark days of one set of proprietary apps talking to the same vendor's apps on a different box, with no useful interoperability).

  3. A Non e-mouse Silver badge

    SCTP

    In theory SCTP provides a lot (I accept not all) of the features of QUIC. The problem is that SCTP is another protocol on top of IP, and so firewalls, etc. may block it, as they don't know what to do with it. Hence QUIC was implemented on top of UDP to allow it to traverse firewalls.

    1. Rich 2 Silver badge

      Re: SCTP

      I was just about to mention SCTP.

      But defining yet another protocol (i.e. QUIC) seems to be the wrong solution to the problem. The real solution should be to make the routers etc. understand SCTP - the manufacturers of such stuff have certainly had long enough to do this. The SCTP spec is years old - RFC 3286 is dated 2002!!!

      Why can't we have nice things again???

      1. Michael Wojcik Silver badge

        Re: SCTP

        I suspect the problem is not so much routers per se but other middleboxes: firewall appliances, load balancers, reverse proxies, the NAT components of SOHO routers, TLS terminators, and so on. Things that want to inspect traffic.

        And, yes, not supporting SCTP in those things is regrettable, but it's the usual mutual-dependency problem: Vendors don't support SCTP because few people use it, and few people use it because vendors don't support it.

      2. phuzz Silver badge

        Re: SCTP

        Oblig XKCD.

        (So obligatory that not only can you guess which one it is, you can probably quote the entire thing)

  4. martinusher Silver badge

    TCP isn't a good messaging protocol

    If you look at the most common uses for TCP, you find that they include some kind of crude framing, often the three-digit FTP-style request/response codes, to convert what is a simulated full-duplex serial stream into a message protocol. This is extraordinarily inefficient. TCP was designed primarily for terminal handling, and it relies on innumerable kludges to function reliably, each one of which contributes to surplus traffic and loss of performance. The obvious thing to do about this is to incorporate some kind of block messaging system on top of UDP. It doesn't strictly need to go on top of UDP except that this is convenient -- there are plenty of examples of protocols that were moved from native to UDP so they could be used over the Internet (e.g. SMB, NetWare) -- so there's nothing preventing one from defining a new IP protocol; it would save the minimal overhead of the UDP header, that's all.

    Many -- most, in fact -- programmers who use TCP for communications to a device misuse it. They want the convenience of opening a stream socket and sending/receiving data without giving any thought to what's going on further down the stack. Unfortunately, they invariably treat TCP like a message system -- they send a block of data, they expect to receive it. If anything disturbs this datagram-like behavior, their software misbehaves, so you can't fragment or deal with multiple messages in the same stream; they just don't get the idea that it's a byte-serial transfer (full duplex as well) that isn't formatted as a message unless the application explicitly makes it so. They see the word 'reliable' and think that's all they need to know. (...and then there's the whole business of dropped sockets.....)
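    The usual fix is explicit framing on top of the stream. A minimal Go sketch of the classic length-prefix convention (the four-byte header is an application choice, not anything TCP itself provides):

        // Length-prefixed messages over a TCP byte stream.
        package framing

        import (
            "encoding/binary"
            "fmt"
            "io"
            "net"
        )

        // WriteMsg prefixes msg with its length so the receiver can
        // find the message boundary in the byte stream.
        func WriteMsg(c net.Conn, msg []byte) error {
            var hdr [4]byte
            binary.BigEndian.PutUint32(hdr[:], uint32(len(msg)))
            if _, err := c.Write(hdr[:]); err != nil {
                return err
            }
            _, err := c.Write(msg)
            return err
        }

        // ReadMsg reassembles exactly one message, however many TCP
        // segments it arrived in. io.ReadFull keeps reading until the
        // full count is in -- the step that "one send == one receive"
        // code always skips.
        func ReadMsg(c net.Conn) ([]byte, error) {
            var hdr [4]byte
            if _, err := io.ReadFull(c, hdr[:]); err != nil {
                return nil, err
            }
            n := binary.BigEndian.Uint32(hdr[:])
            if n > 1<<20 { // sanity cap; tune for the application
                return nil, fmt.Errorf("message too large: %d bytes", n)
            }
            msg := make([]byte, n)
            _, err := io.ReadFull(c, msg)
            return msg, err
        }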

    1. Richard 12 Silver badge

      Re: TCP isn't a good messaging protocol

      I see developers assuming that one send call ==> one receive call all the time.

      Hardly anyone immediately grasps that this simply isn't true, especially as it will often "work" on a lot of stacks - until there's any congestion, or routers in the middle, and then it explodes and they've no idea what happened.

      1. david 12 Silver badge

        Re: TCP isn't a good messaging protocol

        The socket API is blocking. You have a choice of hoping that the message will come in one RX, or waiting forever for the next byte.

        It's an API designed in the early 1980s, when servers had only one network card. The socket interface is even more limiting than raw TCP.

  5. DS999 Silver badge

    Had RDP been implemented widely in the 90s

    It would be the base on which all RPC was built instead of TCP/IP. But Unix and Windows only implemented UDP and TCP, not RDP.

    The RFC dates to the 1980s and was updated in 1990, so that could have been done, but there was no use case for reliable datagrams at the time, and by the time there was, it was too late: everyone was plowing ahead with TCP.

  6. Yes Me Silver badge

    TimBL's day job

    When HTTP came along in the early 1990s, it wasn’t trying to solve an RPC problem so much as an information sharing problem, but it did implement request/response semantics.
    Tim Berners-Lee's day job when he designed HTTP was implementing and supporting RPC for physics experiments. He'd known about RPC since at least 1980. It was no coincidence that he implemented request/response.

  7. Anonymous Coward

    Thanks!

    An interesting article with a point well made about the difference in use cases between stream, datagram, and request(s)/response(s) protocols; and some very pertinent comments too.

  8. samsp

    This article misses what I think is the sweet spot for QUIC, which is use over the last mile for mobile devices. Cellular and Wi-Fi can be spotty. QUIC is designed to recover lost packets without interrupting the streams unaffected by the loss. A QUIC connection can also be migrated between networks, continuing its streams without each needing to be re-established. So if you lose Wi-Fi and switch to cellular, the existing requests will transition and continue.

    So far traffic is better handled over HTTP/1.1 and HTTP/2 than QUIC, but if a CDN is at the network edge, it can use HTTP/3 to the end users and backhaul over HTTP/1.1. This is why Akamai and Cloudflare are at the forefront of QUIC deployments. Most routing equipment has been optimized for TCP rather than UDP, so it will take a while for QUIC to catch up in throughput and latency compared to HTTP/1.1.

  9. Anonymous Coward

    Well, that was a waste of time reading

    Just a load of bollocks

  10. Anonymous Coward

    QUIC for RPC

    I get the streams, head-of-line blocking, fast setup and other advantages, but why is QUIC a better fit for RPC?

    If I'm not mistaken, it is still a streaming protocol, so the RPC layer must still add headers to recognize message boundaries.
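    One partial answer: every QUIC stream carries its own end-of-stream marker (FIN), so an RPC layer can map one call to one stream and let the FIN delimit the message - roughly how HTTP/3 delimits a request and response. A hedged sketch with quic-go (connection setup omitted; the library's API has changed between versions):

        // One RPC per QUIC stream: the stream's FIN marks the message
        // boundary, so no length header is needed for a single
        // request/response exchange.
        package rpcsketch

        import (
            "context"
            "io"

            "github.com/quic-go/quic-go"
        )

        func Call(ctx context.Context, conn quic.Connection, req []byte) ([]byte, error) {
            stream, err := conn.OpenStreamSync(ctx)
            if err != nil {
                return nil, err
            }
            if _, err := stream.Write(req); err != nil {
                return nil, err
            }
            // Close shuts only the send direction (our FIN);
            // the stream can still be read.
            if err := stream.Close(); err != nil {
                return nil, err
            }
            // Read the reply until the server's FIN arrives as EOF.
            return io.ReadAll(stream)
        }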
