How TCP's congestion control saved the internet

With the annual SIGCOMM conference taking place this month, we observed that congestion control still gets an hour in the program, 35 years after the first paper on TCP congestion control was published. So it seems like a good time to appreciate just how much the success of the internet has depended on its approach to managing …

  1. MrReynolds2U

    Thanks Bruce

    Always a pleasure when we get to read articles by those who helped create or contribute to what we use every day.

  2. abend0c4 Silver badge

    If you found this interesting....

    ... you might also want to check out an interview with Raj Jain, also an early congestion researcher, over on the RIPE Labs website. It's a reminder that whatever network technologies are "competing for ascendancy", they're all essentially trying to solve the same problem...

    And yes, I do remember CLNP and TP4!

  3. Happy_Jack

    Unfortunately, TCP treats pretty much any delay or packet loss as congestion, which is not always appropriate, particularly over wireless networks. Thankfully we also have UDP for when performance matters.
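
    To illustrate the point, here is a tiny toy sketch (purely illustrative, not any real stack's code) of the classic loss-based reaction, where every loss is read as congestion and halves the window, even if the loss was just radio noise:

        # Toy AIMD loop: every loss is treated as congestion and halves the
        # window, even when the link isn't actually congested (e.g. wireless
        # noise). Numbers are illustrative only.
        import random

        def aimd(rounds=1000, random_loss=0.0):
            cwnd, delivered = 10.0, 0.0
            for _ in range(rounds):
                delivered += cwnd
                if random.random() < random_loss:   # non-congestion loss
                    cwnd = max(cwnd / 2, 1.0)       # multiplicative decrease anyway
                else:
                    cwnd += 1.0                     # additive increase per round
            return delivered / rounds               # average window, a throughput proxy

        print(aimd(random_loss=0.0), aimd(random_loss=0.01))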

    1. Anonymous Coward
      Anonymous Coward

      Hence HTTP 3...

      ... which now sits on top of UDP, whereas every version up to HTTP/2 was based on TCP.

      Let's see the adoption and the DPI changes this will cause.

      1. dtaht

        Re: Hence HTTP 3...

        I like to hope that libreqos, preseem, etc will make a difference, even in the face of QUIC, and without DPI.

      2. Anonymous Coward
        Anonymous Coward

        Re: Hence HTTP 3...

        Oh dear, that sounds like a security nightmare. It's way easier to man-in-the-middle UDP than TCP... it'll be interesting to see how/if they solve that.

    2. hammarbtyp

      In real-time systems, where predictability is often more important than delivery guarantees, we avoid TCP like the plague. MODBUS TCP has been largely superseded by MODBUS UDP because, when you are controlling large machines, the last thing you want is for the protocol to decide to delay sending packets for a while.

      1. martinusher Silver badge

        A big problem with TCP is that you can't dissuade programmers from sending datagram-type messages over it. They use ad-hoc framing mechanisms (or often no framing at all, just relying on the de-facto inter-message gap to frame messages). It all stems from people not knowing what goes on behind the curtain -- they grab a socket, create the connection and start sending messages, and it invariably works on their desktop. When it proves unreliable in real life they just add more and more kludges to patch it up, which ends up grossly inefficient in traffic or time. If you then try to explain the problem they just mutter the mantra about 'reliable' and you're tuned out.

        The fact that the entire Web -- essentially most of the Internet -- is running on this kind of kludgery (HTTP framing over a TCP stream protocol) seems to be one of the worst-kept secrets in networking.
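
        For what it is worth, a minimal length-prefix framing sketch (illustrative only, not anyone's production code) shows how little it takes to do it properly instead of relying on inter-message gaps:

            # Minimal length-prefixed framing over a TCP stream (sketch only).
            # Each message is a 4-byte big-endian length followed by the payload,
            # so the receiver never depends on timing gaps for message boundaries.
            import socket
            import struct

            def send_msg(sock: socket.socket, payload: bytes) -> None:
                sock.sendall(struct.pack("!I", len(payload)) + payload)

            def recv_exact(sock: socket.socket, n: int) -> bytes:
                buf = b""
                while len(buf) < n:
                    chunk = sock.recv(n - len(buf))
                    if not chunk:
                        raise ConnectionError("peer closed mid-message")
                    buf += chunk
                return buf

            def recv_msg(sock: socket.socket) -> bytes:
                (length,) = struct.unpack("!I", recv_exact(sock, 4))
                return recv_exact(sock, length)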

  4. alain williams Silver badge

    Another reason for Internet's success ...

    is the RFC process. It is open, lightweight and fast. This means that new or updated protocols could become known quickly and used quickly.

    Contrast this with, e.g., the ISO process: large committees, meetings in nice parts of the world, and several years before anything gets out.

    1. Camilla Smythe

      Re: Another reason for Internet's success ...

      "It is open, lightweight and fast."

      Tell me about this wonderous thing. I may have gone via the wrong route but I tried to write an Internet Draft and hit,

      https://author-tools.ietf.org/idnits

      I had hoped, undoubtedly naively, to propose a means to improve Internet routing by imposition, yes I know, of a particular format, yes I also know.

      Unfortunately idnits threw up so many indecipherable complaints about the formatting of my suggestion that I decided to let it rot.

      1. Yes Me Silver badge
        Thumb Up

        Re: Another reason for Internet's success ...

        Open - yes.

        Lightweight - not really. It takes a lot of work to get rough consensus on a draft (most drafts fail).

        Fast - not really. It typically takes several years for a large piece of work to get through the process from a first draft to an RFC.

        (And worldwide meetings: the next IETF meeting is in Prague. The following one is in Brisbane.)

        As for the specific issue of idnits, I don't think it's any more picky than other standards organisations or technical publishers. But writing a draft is probably the wrong place to start - writing a rough proposal as email to the relevant IETF working group is generally recommended as the zeroth step. See https://www.ietf.org/how/lists/#wgbof

        1. Camilla Smythe

          Re: Another reason for Internet's success ...

          Thanks for the link. Appreciated.

        2. R Soul Silver badge

          IETF meetings

          "the next IETF meeting is in Prague. The following one is in Brisbane."

          So what? It's not necessary to attend them. The IETF makes decisions about protocol standards on its mailing lists, not at its meetings.

          Taking part in IETF meetings can be done on-line and about 35% of the participants do that. In fact almost all interim WG meetings are done on-line.

          Oh and one of the most significant pieces of work done recently by the IETF is DoH: DNS over HTTPS. A new WG was created and published RFC8484 in under a year. Other stuff can and does take longer. Which is understandable. A conservative approach is needed when it comes to making changes to core protocols and/or the Internet architecture: backwards compatibility, impact of new stuff on the installed base, security/privacy issues.

      2. Alan Brown Silver badge

        Re: Another reason for Internet's success ...

        There's a good reason for the nitpicking

        All you have to do is look at the almost perverse creative misinterpretations of older RFCs to understand why formatting and language clarity are important.

        (English is almost the worst possible language for writing technical documents)

    2. dtaht

      Re: Another reason for Internet's success ...

      It took 6 years to get fq_codel through the IETF process, which still stings. QUIC has been under development for 10.

    3. abend0c4 Silver badge

      Re: Another reason for Internet's success ...

      "It is open, lightweight and fast"

      *cough*, IPv6, *cough*

      1. I could be a dog really Bronze badge

        Re: Another reason for Internet's success ...

        The situation with IPv6 is ... complicated.

        IPv4 took off because it solved a problem that people realised existed. Too many people still believe there is no problem with IPv4 and address exhaustion - mostly because someone came up with that horrible kludge of NAPT (or to most people, just NAT). Suddenly "problem solved" and so no-one wants to fix it. The massive borkage created by NAT is invisible to most people thanks to the massive amounts of effort put in by many people to work around its limitations.

        There are indeed valid arguments about IPv6 being too complicated, but in large part that is because we'd seen what IPv4 could do - and more importantly what it couldn't do (either at all, or easily, or well, or perm any combination). Basic case - every IPv6 host has to support multiple addresses on each interface (one of them being a link local address) and understand the concept of having multiple prefixes (akin to IPv4 subnets) on a wire. And it has to support the idea that for some prefixes, it won't be able to talk directly to a neighbour. These are things that are hard to do in IPv4, but which have real-world applications. So this makes things seem more complicated as it's not just a matter of learning to deal with longer addresses. Once you get the hang of it, for basic setups it's not really harder than IPv4, just different. Hence there's a learning curve, hence resistance to "fixing" something that many people don't see as being broken.

        So take-up of IPv6 has been slow - because for many people it solved a problem they didn't know, or didn't want to accept, they had.

        Just don't get me started on how a single company has made it an almost religious thing that network managers/administrators are not allowed to manage their networks with tools like DHCP. Not only does Android not support DHCP for IPv6, but AIUI Google have made it impossible for any third party to provide a DHCP client. As a result, if you use DHCP then Android devices won't work on your IPv6 network; if you don't, then quite a few types of business can't comply with the law.
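
        (On the link-local point above, a rough sketch of how a host can derive its fe80:: address from its MAC via modified EUI-64 - the MAC here is made up, and many hosts use privacy or stable addresses instead:)

            # Sketch: IPv6 link-local address from a 48-bit MAC via modified EUI-64
            # (flip the universal/local bit, insert ff:fe, prefix with fe80::/64).
            import ipaddress

            def link_local_from_mac(mac: str) -> ipaddress.IPv6Address:
                octets = bytearray(int(b, 16) for b in mac.split(":"))
                octets[0] ^= 0x02                     # flip universal/local bit
                eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
                return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64)

            print(link_local_from_mac("52:54:00:12:34:56"))   # fe80::5054:ff:fe12:3456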

  5. LateAgain

    The biggest reason for its success

    Is probably the simple fact that it's not proprietary.

    Which meant that anyone at all could have a go at writing an IP stack. And not get sued.

    1. Mike Pellatt

      Re: The biggest reason for its success

      True, except for the competition with ISO, which did look like it would win through at the time.

      We wrote our own TP4, initially for the Mac, because the commercials for the only other one available just didn't work.

      1. This post has been deleted by its author

      2. Roland6 Silver badge

        Re: The biggest reason for its success

        OSI did gain much limelight. However, for it to become a real competitor it had to overcome a couple of major hurdles. Firstly, the vendors with proprietary networking definitely saw LAN and WAN as extra-cost options and not something that should be in the box; unlike BSD Unix systems, where Ethernet LAN was in-the-box and TCP/IP APIs were defined. Secondly, it needed a market: if the US and UK governments had followed through and insisted on GOSIP compliance, that would have encouraged vendors to turn their prototype implementations (exhibited at ENEi’88) into real commercial products available across their range. But government sales don't mean wider commercial sales, and with both networked BSD workstations and PCs mushrooming, OSI needed to offer something more than Telnet, FTP, SMTP equivalence…

        Interestingly, after the last Systems Approach article I did a Google search on Retix and ISODE, and was a little surprised that there are sectors that use Retix(*) OSI today.

        I also came across this seemingly authoritative site (**) for those interested in the early days of networking: https://historyofcomputercommunications.info/

        > We wrote our our own TP4, initially for the Mac

        Was this the Touch Communications implementation?

        (*) Now Xelas Software

        (**) As one of the team that put the event together, I don’t disagree with their chapter on this event.

  6. John Riddoch

    Ah, ATM

    We were being told in 1997 by our lecturer that ATM was the way forward and Ethernet would die out. In those days, thin-net was still used in our computer labs and anything RJ-45-related would probably connect to a dumb hub rather than a switch. Ethernet survived by becoming switched by default (that shift had already started in those days and accelerated as costs came down), avoiding the worst congestion issues it had suffered on coax/hub deployments, and getting faster, so the advantages of ATM weren't as clear as they had been.

    1. Yes Me Silver badge

      Re: Ah, ATM

      Ethernet survived by totally changing its nature, except for the layer 2 API.

      ATM's design (cells with a 48 byte payload) was ridiculously inappropriate for TCP/IP and only marginally sensible for Plain Old Telephone Service multiplexing.
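
      A back-of-the-envelope cell-tax calculation (AAL5 figures, purely illustrative) makes the point:

          # ATM "cell tax" for carrying an IP packet over AAL5: the packet plus an
          # 8-byte trailer is padded to a multiple of 48 bytes, and every 48-byte
          # payload rides in a 53-byte cell (5-byte header).
          import math

          def atm_overhead(ip_packet_bytes: int) -> float:
              cells = math.ceil((ip_packet_bytes + 8) / 48)
              return cells * 53 / ip_packet_bytes

          print(atm_overhead(1500))   # ~1.13: roughly 13% overhead for a full-size packet
          print(atm_overhead(40))     # a bare TCP ACK fares far worse (~33%)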

      1. Mike Pellatt

        Re: Ah, ATM

        And yet ATM survives in all our xDSL connections :-)

        1. Anonymous Coward
          Anonymous Coward

          Re: Ah, ATM

          Thanks Pornhub.

      2. kirk_augustin@yahoo.com

        Re: Ah, ATM

        ATM only needed a small but consistent frame size because you did not do all the handshaking for each frame, so you could then quickly and easily send as many frames as you wanted, making frame size irrelevant. And while some may like the variable packet size of TCP, in reality all virtual packets are always actually transferred by a consistent physical frame. So all the TCP variable packets do is add lots of run-time overhead.

    2. hammarbtyp

      Re: Ah, ATM

      The reason ATM was put forward was that the 48-byte cell could be switched very quickly by the hardware of the day, meaning it could support many channels.

      What changed was that hardware got faster and cheaper, so the need for hardware-optimised data flows went away and there was no need for a dedicated switch infrastructure.

      1. Yet Another Anonymous coward Silver badge

        Re: Ah, ATM

        I thought the 48 bytes was because the telecoms people wanted 32 bytes for low latency and the data people wanted 64 bytes for better throughput - so they compromised.

        1. kirk_augustin@yahoo.com

          Re: Ah, ATM

          True, the 48 bytes is a compromise, but frame size does not matter at all when you only handshake once, to establish a permanent virtual circuit before the first frame. Since there is no overhead for the following frames, frame size is fairly irrelevant. You just send more frames instead of making frames larger. The only time it matters is with long-distance satellite communications, where there are such huge transmission latencies.

      2. kirk_augustin@yahoo.com

        Re: Ah, ATM

        ATM is still used for anything where speed matters because TCP is almost 10 times slower. The military, aviation, cellphones, cars, financial institutions, etc., all do not use TCP.

        The only people who do use TCP are the ones who don't really care about how bad TCP is, and simply want plug and play compatibility.

        1. veti Silver badge

          Re: Ah, ATM

          "The only people who do use TCP are the ones who don't really care about how bad TCP is, and simply want plug and play compatibility."

          Which is to say, about 98% of everybody.

    3. Annihilator

      Re: Ah, ATM

      Weird, I came here to post the same anecdote, but my lecture reference was around 2001. Don't suppose you went to Edinburgh University with Gordon Brebner as the lecturer?...

    4. toejam++

      Re: Ah, ATM

      The trend towards full-duplex switched Ethernet ports really helped, but so did lower costs and the introduction of QoS over Ethernet. The project to migrate our campus from IP and IPX over Ethernet to ATM came to a grinding halt once Gigabit Ethernet kit started hitting the market. Our shop ended up with some 3Com Superstack and Corebuilder switches that offered a lot more value than the ATM kit on the market. It also helped that IP-based PBX systems started getting decent and that we could set the telephony VLANs to a higher QoS priority than the standard VLANs across the backbone links.

  7. Tanj

    WAN congestion is not DC congestion

    The success of congestion control on the WAN caused a lot of trouble inside the DC. I was once working on a large map-reduce project where problems were routinely assumed to be due to incast overload; investigations never actually proved that, but they did show that pretty much every failure in response time was due to congestion protocols killing a flow.

    The situation is a bit better these days with faster congestion control (at least the glitches are shorter), but I wonder how much better it would be if we just used a tight transmission window in TCP, scaled to match the microsecond latencies and high-quality paths inside a DC. Just have the senders run out of tokens when the path delays get longer due to congestion. It is how TCP worked back in the dawn times, and it won the competition against other methods, which is why it survived. The modern DC is scaled much more like those early networks, in terms of packets in flight on a path, than like the WAN, where sending packets is like pushing limp spaghetti.

    Of course, certain other conveniences would need to be reconsidered. Like moving the ACKs down into the NIC and not waiting for piggyback on responses, so that the transmit window tokens are used to pace the transport, not the service. More like InfiniBand.
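
    For scale, a quick bandwidth-delay-product comparison (numbers picked purely for illustration) shows how small a sensible DC window is next to a WAN one:

        # Bandwidth-delay product: bytes "in flight" on a path. Illustrative
        # numbers only: a 100 Gbit/s DC hop at ~10 us RTT versus a 1 Gbit/s
        # WAN path at ~50 ms RTT.
        def bdp_bytes(gbit_per_s: float, rtt_s: float) -> float:
            return gbit_per_s * 1e9 / 8 * rtt_s

        dc = bdp_bytes(100, 10e-6)    # ~125 kB, call it 80-90 full-size packets
        wan = bdp_bytes(1, 50e-3)     # ~6.25 MB, thousands of packets in flight

        print(f"DC BDP  ~ {dc / 1e3:.0f} kB")
        print(f"WAN BDP ~ {wan / 1e6:.2f} MB")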

    1. dtaht

      Re: WAN congestion is not DC congestion

      I often wonder the same about incast. I am a big believer in the power of fair queuing to avoid starvation. The much-discussed incast workload had DCTCP arrive with a modification to ECN to provide multi-bit signalling, and the two memes have ridden together without much investigation, aside from the fact that letting DCTCP loose outside the DC fills some with terror.

      I have instead stuck with conventional RFC3168 ECN, leveraging fq_codel, but with the target set as low as can be achieved in software (50us). It seems to scale properly down to an MTU in simulation, no matter how big the load. Someday, perhaps, someone will reproduce the results we have been getting from that method in some popular publication. https://blog.cerowrt.org/post/juniper/

      There is a lot of good work going on in Linux on pushing TCP to unheard-of single-flow heights. Recently Eric Dumazet (who deserves fame!) cracked 120 Gbit/s for a single flow.

    2. kirk_augustin@yahoo.com

      Re: WAN congestion is not DC congestion

      The solution is to establish permanent virtual circuits ahead of time, like ATM does. Then both sides agree how many uniform frames will be sent ahead of time, and the overhead and delays essentially disappear.

  8. zootle

    Every dog has his day!

    Back in the 90s, I worked on a Nortel (remember them?) ATM switch that had become extremely popular as a replacement for large PABXs. A telephone exchange you could stick in a closet with no special power and cooling requirements! They sold by the shipload, replacing floor-sized exchanges with a small box on each floor.

    Then along came IP telephony and the rest is, as they say, history.

    1. hammarbtyp

      Re: Every dog has his day!

      Ditto the Ericsson AXD series. However, it did help spawn Erlang, so not all was lost.

    2. Anonymous Coward
      Anonymous Coward

      Re: Every dog has his day!

      Ah yes, the short-term oasis we had in telephony between massive PABX kit and VoIP.

      Even though I actively avoid getting involved in telephony these days, I miss simple on-prem phone systems...that extension maps to that socket...when people started moving to systems like Asterisk, I was out...the complexity skyrocketed and maintaining these systems was a massive ballache by comparison.

  9. D. Evans

    ATM was bad from the start.

    I'm retiring in 2026, and have been lying about my IT skills for three decades but still scamming a paycheck. Anyone with any honesty understood why ATM would fail, and did fail, and finds anyone who supported ATM to be too biased to ever understand why it was bad.

    As a simple example of why: think of a relay race with a single baton - you only have one, but multiple teams need it. Instant lock-up.

    Oversimplification/abstraction will lead to rubbish implementations.

    And I'm really glad the OSI model didn't die before I went to university. I was in my 30s and knew it was shit then. See my last paragraph.

    1. Anonymous Coward
      Anonymous Coward

      Re: ATM was bad from the start.

      You can't really lie about your IT skills man...nobody in this space knows everything. What sets a good IT guy apart from a shitty one is a balanced blend of bravery and sheer madness.

      I've made a career out of supporting things that nobody else will touch, with non-existent / piss-poor documentation...projects that have drifted because devs got bored or missed the point etc etc...or setups that were built by one-man-band autistic lunatics on the cheap, fucking mental CTOs on an ego trip etc etc...it's very rare for me to enter into a contract and have some sort of clue about anything...what I do know is that once everything is fixed and tickety-boo, the client will immediately start looking for the next team to fuck them over and I end up having to move on...I'm the "Littlest Hobo" of tech...I swear my career has essentially been the tech equivalent of Quantum Leap. Each episode is a different client, with different problems, and ends with me walking off to the next client with different issues.

      My formative years in tech were as a kid in the late 80s / early 90s, before the internet and search engines, which involved a lot of stripping things down then figuring out how to put them back together again, finding features that had been locked off or hidden by the manufacturers, hacking software/firmware with hex editors etc etc...it was all trial and error and reverse engineering, which are methods not commonly used these days...skills that have served me well for nearly 3 decades...because as yet, there has been nothing I haven't been able to reverse engineer and fix...it's been getting a lot easier in recent years, especially with "web" technologies...because the trend towards using frameworks etc has made a lot of projects more alike than they are different and a lot less complicated than software used to be...yeah, yeah...I can feel the developers out there...I know you're all unicorns with a genuinely unique skillset, every single one of you is the best developer on Earth.../s.

      You haven't been lying about your skills, you just have an uncommon skill set that doesn't come with some sort of formal credential. You've lasted 30 years "winging it" because of your skills.

  10. dtaht

    VJ has made many more contributions

    Van Jacobson and Kathie Nichols cracked the congestion control problem even further with the publication of the CoDel AQM algorithm: https://queue.acm.org/detail.cfm?id=2209336

    The fq_codel variant (RFC8290) is in many (but not enough) routers today, is the default in Linux, and the fq part, at least, is in all of Apple's products.

    VJ further was part of the BBR effort. Without him, what would the future have looked like?
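
    For the curious, the heart of CoDel is a tiny control law: if queueing delay stays above a target for a whole interval, drop, and space subsequent drops at interval divided by the square root of the drop count. A stripped-down sketch of just that law (nothing like the real qdisc code):

        # Stripped-down sketch of CoDel's control law (see the ACM Queue paper /
        # RFC 8289). Not the real qdisc: no dropping-state machinery, just the
        # core "above target for a full interval, then drop at interval/sqrt(count)" idea.
        from math import sqrt

        TARGET = 0.005      # 5 ms acceptable standing queue delay
        INTERVAL = 0.100    # 100 ms, roughly a worst-case RTT

        class CoDelSketch:
            def __init__(self):
                self.first_above_time = None
                self.drop_next = 0.0
                self.count = 0

            def should_drop(self, sojourn: float, now: float) -> bool:
                if sojourn < TARGET:
                    self.first_above_time = None
                    self.count = 0
                    return False
                if self.first_above_time is None:
                    self.first_above_time = now + INTERVAL
                    return False
                if now >= self.first_above_time and now >= self.drop_next:
                    self.count += 1
                    self.drop_next = now + INTERVAL / sqrt(self.count)
                    return True
                return False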

  11. hammarbtyp

    TSN

    It's interesting that the big issue was the need to actively define the congestion requirements rather than allow each link to manage its own congestion control.

    We had a similar issue with TSN, which sounded great until we found we needed to map the network requirements first. This is fine in a static system (like a car), but more difficult in a more dynamic one.

  12. kirk_augustin@yahoo.com

    TCP - terrible control protocol

    TCP is awful and all other transfer protocols are a lot better. But TCP was the military and educational standard for so long that nothing else ever had any chance as a standard. But anyone with an option, like financial institutions, the modern military, cellphones, aviation, automotive, etc., would never use TCP. Frame Relay came from X.25, and later morphed into ATM. It is vastly superior to TCP, about 5 times faster, and much less difficult to implement. To support TCP/IP, you have to implement ancient libraries like Veronica, Archie, FTP, Gopher, etc. It is a kitchen sink approach instead of an optimized approach.

    The main functional difference is that with ATM, all the frames are the same size and you do the overhead of handshaking a connection only once.

    While with TCP, there is no packet size standard, and you have to do all the handshaking back and forth each and every time you send any packet at all.

    TCP is immensely more complex, slow, and prone to crashes.

    Probably about a whole order of magnitude slower and less reliable.

    1. pulcra

      Re: TCP - terrible control protocol

      TCP destroyed the Internet, I concur.

      At the application level, let a "connection" be modelled implicitly (depending on, and optimised to, an application's needs) and stick with UDP or whatever for the actual transport. Do not delegate such logic to the transport layer!

  13. Anonymous Coward
    Anonymous Coward

    XUNET

    Bell Labs, located in Murray Hill, NJ, USA, had XUNET (similar to ARPANET) over Datakit. It connected them to the University of California, Berkeley (UCB) - specifically a VAX-11/750 named Monet. You can see the source code; it's in the CSRG ISOs.

    I found two research papers about XUNET.

    One thing I would like to know is if Datakit can operate point to point without a switch. It would help with retrocomputing. The protocol is available in UNIX v8, which is publicly available and can be run on SIMH. Datakit source code is also in UNIX v8.

    1. Anonymous Coward
      Anonymous Coward

      Re: XUNET

      found another one:

      TUHS/UA_Distributions/Research/Norman_v10/usr/src/cmd/nupas/config/112clients/xunetroute

      TUHS/UA_Distributions/Research/Norman_v10/usr/src/cmd/nupas/config/research/xunetroute.res

      That and the two papers:

      Delay and Throughput Measurements of the XUNET Datakit Network
      Thomas VandeWater, EECS Department, University of California, Berkeley
      Technical Report No. UCB/CSD-88-474, November 1988
      https://www2.eecs.berkeley.edu/Pubs/TechRpts/1988/6057.html

      It has diagrams of the network architecture.

      Performance Analysis of Queueing Algorithms on Datakit T1 Trunk Lines
      Michael J. Hawley, Computer Science Division, University of California, Berkeley, 1989
      https://digitalassets.lib.berkeley.edu/techreports/ucb/text/CSD-89-547.pdf

      I think on VAX-11/780 a "serial port accelerator" KMC11 was used for Datakit.

  14. MichaelGordon

    Why OSI didn't take off

    A big reason why OSI never really got anywhere was that it was just far too big and complicated. Cut-down versions of some of its ideas, such as SNMP and LDAP, have seen widespread use, but the majority of the OSI stack died a well-deserved death, serving only as an example of what happens when you turn technical matters over to massive committees and their associated bureaucracy.

  15. rcxb Silver badge

    Incredible backbone speeds

    One reason I give credit to congestion control algorithms in explaining the success of the internet is that the path to failure was clearly on display in 1986. Jacobson describes some of the early congestion collapse episodes, which saw throughput fall by three orders of magnitude.

    The reason the internet didn't collapse is that the backbones were able to keep getting drastically faster - more than just keeping up with traffic growth. That has ensured that the vast majority of the distance your data travels is entirely free of congestion, with congestion only an issue at the last mile. This reality is very different from the old 1980s assumptions of telecom networks, that we needed protocols that would handle heavy congestion all the time.

  16. TDog

    TCP / IP

    For me, when I was doing this in the 1980s, it was very simple. It only had four protocol layers and that was easier to remember. Oh, yes, and it made sense.
