Everything you wanted to know about modern network congestion control but were perhaps too afraid to ask

It’s hard not to be amazed by the amount of active research on congestion control over the past 30-plus years. From theory to practice, and with more than its fair share of flame wars, the question of how to manage congestion in the network is a technical challenge that resists an optimal solution while offering countless …

  1. Warm Braw Silver badge

    A’s improvement is not fairly gained

    The issue here is that you're essentially relying on the providers of transport stacks to "do the right thing". It's fundamental to connectionless network layers that it's up to the endpoints to police the rate at which they throw packets into the network, and there's only the most basic feedback (discarded packets or, at best, a congestion flag) to indicate that the rate may be too high relative to other traffic. For any individual endpoint, the best response to that indication may well be to increase its traffic rate by sending out multiple copies of the same packets, increasing the chance - relative to well-behaved endpoints - that at least one copy survives the queue drops at the point of congestion. There's no real defence against that.
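    To make the incentive concrete, here is an illustrative sketch (not from the comment itself; the drop probability is an assumed figure): if the congested queue drops each packet independently with probability p, a sender that emits k copies of every packet sees at least one copy survive with probability 1 - p^k, while a well-behaved sender gets only 1 - p - and the duplicator also multiplies its offered load by k, worsening congestion for everyone else.

    ```python
    def delivery_probability(drop_prob: float, copies: int) -> float:
        """Probability that at least one of `copies` independent copies survives."""
        return 1.0 - drop_prob ** copies

    p = 0.5  # assumed drop probability at the congested hop
    print(f"1 copy  : {delivery_probability(p, 1):.3f}")  # 0.500
    print(f"3 copies: {delivery_probability(p, 3):.3f}")  # 0.875
    ```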

    Existing congestion algorithms don't really fare well if there are multiple network paths - the connection will get throttled to the speed of the most congested path - and the transport protocol itself doesn't really lend itself to, for example, prioritising latency over reliability (actively encouraging routers to drop rather than queue packets).

    Where you have more control over the behaviour of the network layer (knowledge of or even control over bandwidth, latency, etc) and perhaps some notion of resource reservation you can clearly do "better" than in the case of the Internet at large - but that may well mean new transport protocols as well as different traffic-control algorithms.

    And if you're going that far, some distributed ingress control (to put a lid on DDoS) might be worth considering too.

  2. Graham Cobb Silver badge

    Too much emphasis on throughput

    Note: I have not yet read the book or other papers related to this... I am just reacting to the article.

    The article seems to be focused too much on throughput and not enough on delay (latency) - and even less so on jitter and stability. Delay (and jitter) - and their related fairness issues - are much harder problems to solve than throughput and, arguably, much more critical in both human and commercial data processing problem domains. For example, users may be much more tolerant of quality issues in a movie than of slow presentation of a web page caused by lots of small interactions with various bits of javascript, and tiny delays in approving credit card transactions accumulate into a big impact on costs for retailers.
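    A back-of-envelope sketch of that accumulation effect, using entirely assumed numbers (the delay and transaction volume are illustrative, not from the comment):

    ```python
    # Small per-transaction delays compound across high transaction volumes.
    extra_delay_s = 0.05     # assumed 50 ms of added authorisation latency
    txns_per_day = 200_000   # assumed daily card transactions for a retailer

    total_hours = extra_delay_s * txns_per_day / 3600
    print(f"{total_hours:.1f} customer-hours of waiting per day")
    ```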

    Edge features (particularly CDN-like capabilities) help massively with throughput (which moves that problem to being as much a caching problem as a TCP problem). Particularly for delivery cases (video download, etc).

    Edge can, of course, help with delay - but only in cases where the edge can have enough information/authority to create a local response (so, things like user authorization may work much better than checking whether there is enough money left in your bank account).

    Ultimately, speed-of-light delays cannot be avoided in some transactions; they arise in scenarios where some data must be co-ordinated and updated across distributed databases (so, for example, in my days working on Prepaid Charging systems we used things like approving a transaction locally if there seemed to be plenty of money in the local copy of the user's wallet, but doing a full distributed transaction commit - which takes a long time across the breadth of the US - if the balance was nearly exhausted).
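    A minimal sketch of the approval strategy described above; the function names and the headroom threshold are hypothetical, chosen only to illustrate the fast-path/slow-path split:

    ```python
    SAFETY_MARGIN = 5.0  # assumed headroom multiplier over the charge amount

    def approve(local_balance: float, amount: float) -> str:
        """Approve against the local replica when the balance has ample headroom;
        otherwise fall back to a slow, fully coordinated distributed commit."""
        if local_balance >= amount * SAFETY_MARGIN:
            return "approved-locally"    # fast path: local copy is safely ahead
        return "distributed-commit"      # slow path: coordinate across sites

    print(approve(100.0, 5.0))  # approved-locally
    print(approve(12.0, 5.0))   # distributed-commit
    ```

    The design trade-off is that the fast path risks a small overdraft if replicas lag, which is acceptable only while the balance is far from exhausted - exactly when the comment says the local approval was used.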

    Jitter and stability can be equally hard problems, particularly if they lead to systematic unfairness (for example, one set of customers have much more variability in their transaction time than another set). Or visible "quality" problems (like audio quality in calls) - codecs try to help but all have tradeoffs/limits.

    1. Warm Braw Silver badge

      Re: Too much emphasis on throughput

      I am just reacting to the article

      Have to say it's interesting not only that Raj Jain's paper is still regarded as a baseline almost 40 years later, but that (at least) two thirds of the comments on this article so far come from his contemporaries at Digital. The company certainly cast a long shadow...

      1. Graham Cobb Silver badge

        Re: Too much emphasis on throughput

        Or just employed people who liked to talk too much?

        1. Warm Braw Silver badge

          Re: Too much emphasis on throughput

          Or are now elderly and have time on their hands...

  3. Eclectic Man Silver badge


    The basic idea behind the philosophy of Utilitarianism is 'the greatest possible benefit for the greatest number of people'. However, that allows for a great deal of harm to be done to a 'small number' of people, and would allow slavery. The approach of 'least harm to anyone' seems to me to be a better one, but with a network carrying different sorts of traffic, from VOIP conversations and video conferencing to downloading PDFs, determining the amount of harm done to any particular participant may be very difficult. Clearly there is more research to be done. I too have not read the papers referenced (they are a bit too long for a quick afternoon's reading), but they look interesting.
